Fusion Middleware Documentation


Administering HTTP Session Management with Oracle Coherence*Web

5 Coherence*Web Session Management Features

This chapter describes the features of Coherence*Web, including session models, session scoping, session locking, deployment topologies, and logging. You can configure Coherence*Web in many ways to meet the demands of your environment. Consequently, you might have to change some default configuration options. This chapter provides an in-depth look at the features that Coherence*Web supports so that you can make the appropriate configuration and deployment decisions.

5.1 Session Models

A session model describes how Coherence*Web stores the session state in Coherence. Session data is managed by an HttpSessionModel object while the session collection in a Web application is managed by an HttpSessionCollection object. You must configure only the collection type in the web.xml file—the model is implicitly derived from the collection type. Coherence*Web includes these different session model implementations:

  • Monolithic Model, which stores all session state as a single entity, serializing and deserializing all attributes as a single operation

  • Traditional Model, which stores all session state as a single entity but serializes and deserializes attributes individually

  • Split Model, which extends the Traditional Model, but separates the larger session attributes into independent physical entities

The following sections provide additional information on each session model.

Note:

In general, Web applications that are part of the same Coherence cluster must use the same session model type. Inconsistent configurations could result in deserialization errors.

Figure 5-1 illustrates the three session models.

Figure 5-1 Traditional, Monolithic, and Split Session Models


5.1.1 Monolithic Model

The Monolithic model is represented by the MonolithicHttpSessionModel and MonolithicHttpSessionCollection objects. These are similar to the Traditional model, except that they solve the shared object issue by serializing and deserializing all attributes into a single object stream. As a result, the Monolithic model often does not perform as well as the Traditional model.

Figure 5-2 illustrates the relationship between the logical representation of data and its physical representation in the session storage cache. In its logical representation, session data consists of metadata and various attributes. In its physical representation in the session storage cache, the metadata and attributes are serialized into a single stream, and a session ID is associated with them.

Figure 5-2 Monolithic Session Model


5.1.2 Traditional Model

The Traditional model is represented by the TraditionalHttpSessionModel and TraditionalHttpSessionCollection objects. The TraditionalHttpSessionCollection object stores an HTTP session object in a single cache, but serializes each attribute independently.

This model is suggested for applications with relatively small HTTP session objects (10 KB or less) that do not have issues with object sharing between session attributes. Object sharing occurs when multiple attributes of a session hold references to the same object; if those attributes are serialized and deserialized separately, multiple instances of the shared object exist after the HTTP session is deserialized.
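The shared-object issue can be reproduced with plain Java serialization. In this sketch (the class and attribute names are illustrative, not part of the Coherence*Web API), two attributes that reference the same list are first serialized into independent streams, as the Traditional and Split models do, and then as a single object graph, as the Monolithic model does:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SharedObjectDemo {

    // Serialize an object graph into its own, independent stream.
    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] b) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> shared = new ArrayList<>();
        shared.add("cart-item");

        // Two session attributes that reference the same object.
        Map<String, Object> attrA = new HashMap<>();
        Map<String, Object> attrB = new HashMap<>();
        attrA.put("ref", shared);
        attrB.put("ref", shared);

        // Traditional/Split style: each attribute serialized independently,
        // so the shared list is instantiated twice on deserialization.
        Map<?, ?> a2 = (Map<?, ?>) fromBytes(toBytes(attrA));
        Map<?, ?> b2 = (Map<?, ?>) fromBytes(toBytes(attrB));
        System.out.println(a2.get("ref") == b2.get("ref"));   // false

        // Monolithic style: the whole session serialized as one stream,
        // so the shared reference is preserved.
        Map<String, Object> session = new HashMap<>();
        session.put("a", shared);
        session.put("b", shared);
        Map<?, ?> s2 = (Map<?, ?>) fromBytes(toBytes(session));
        System.out.println(s2.get("a") == s2.get("b"));       // true
    }
}
```

Applications that depend on the preserved identity shown in the second case are candidates for the Monolithic model.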

Figure 5-3 illustrates the relationship between the logical representation of data and its physical representation in the session storage cache. In its logical representation, session data consists of metadata and various attributes. In its physical representation in the session storage cache, the metadata and attributes are converted to binaries, and a session ID is associated with them. Note that the attributes are serialized individually rather than as a single binary BLOB (as in the Monolithic model).

Figure 5-3 Traditional Session Model


5.1.3 Split Model

The Split model is represented by the SplitHttpSessionModel and SplitHttpSessionCollection objects. SplitHttpSessionCollection is the default used by Coherence*Web.

These models store the core HTTP session metadata and all of the small session attributes in the same manner as the Traditional model, thus ensuring high performance by keeping that block of binary session data small. All large attributes are split into separate cache entries to be managed individually, thus supporting very large HTTP session objects without unduly increasing the amount of data that must be accessed and updated within the cluster for each request. In other words, only the large attributes that are modified within a particular request incur any network overhead for their updates, and (because it uses near caching) the Split model generally does not incur any network overhead for accessing either the core HTTP session data or any of the session attributes.

Figure 5-4 illustrates the relationship between the logical representation of data and its physical representation in the session storage cache. In this model, large objects are stored as separate cache entries with their own session ID.

Figure 5-4 Split Session Model


5.1.4 Session Model Recommendations

The following are recommendations on which session model to choose for your applications:

  • The Split model is the recommended session model for most applications.

  • The Traditional model might be more optimal for applications that are known to have small HTTP session objects.

  • The Monolithic model is designed to solve a specific class of problems related to multiple session attributes that have references to the same shared object, and that must maintain that object as a shared object.

Note:

See Appendix A, "Coherence*Web Context Parameters" for descriptions of the parameters used to configure session models.

5.1.5 Configuring a Session Model

By default, Coherence*Web uses the split session model, where large attributes are split into separate cache entries to be managed individually. You can change the session model used by Coherence*Web by configuring the -Dcoherence.sessioncollection.class system property or by setting the equivalent coherence-sessioncollection-class context parameter in the Web application's web.xml file. As the value of the context parameter (or system property), use the fully-qualified class name of the HttpSessionCollection implementation.

Example 5-1 illustrates a web.xml entry to configure the Monolithic model.

Example 5-1 Configuring the Session Model

...
<context-param>
   <param-name>coherence-sessioncollection-class</param-name>
   <param-value>com.tangosol.coherence.servlet.MonolithicHttpSessionCollection</param-value>
</context-param>
...
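The same selection can be made with the system property named earlier instead of web.xml. The launch command below is only an illustrative sketch (my-app-server.jar is a placeholder, not a real launcher); use your container's actual start script:

```shell
# Select the Monolithic session model via the system property.
# "my-app-server.jar" stands in for your container's launcher.
java -Dcoherence.sessioncollection.class=com.tangosol.coherence.servlet.MonolithicHttpSessionCollection \
     -jar my-app-server.jar
```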

5.1.6 Sharing Data in a Clustered Environment

Clustering can boost scalability and availability for applications. Clustering solutions such as Coherence*Web solve many problems for developers, but successful developers must be aware of the limitations of the underlying technology, and how to manage those limitations. Understanding what the platform provides, and what users require, gives developers the ability to eliminate the gap between the two.

Session attributes must be serializable if they are to be processed across multiple JVMs, which is a requirement for clustering. It is possible to make some fields of a session attribute non-clustered by declaring those fields as transient. While this eliminates the requirement for all fields of the session attributes to be serializable, it also means that these attributes are not fully replicated to the backup server(s). Developers who follow this approach should be very careful to ensure that their applications are capable of operating in a consistent manner even if these attribute fields are lost. In most cases, this approach ends up being more difficult than simply converting all session attributes to serializable objects. However, it can be a useful pattern when very large amounts of user-specific data are cached in a session.
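As a sketch of the transient-field pattern described above (the class and its fields are hypothetical, not part of Coherence*Web), a session attribute can lazily rebuild non-critical derived state that is lost when the session is serialized and later deserialized on another member:

```java
import java.io.Serializable;

// Hypothetical session attribute class showing the transient-field pattern:
// the expensive, derived state is excluded from serialization and is lazily
// rebuilt if the session fails over to another server.
public class UserPrefs implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userId;              // replicated with the session
    private transient String expensiveReport; // dropped during serialization

    public UserPrefs(String userId) {
        this.userId = userId;
    }

    public String getReport() {
        if (expensiveReport == null) {
            // Recomputed on demand, for example after a failover has
            // deserialized this attribute and left the transient field null.
            expensiveReport = buildReport();
        }
        return expensiveReport;
    }

    private String buildReport() {
        // Placeholder for an expensive lookup or calculation.
        return "report-for-" + userId;
    }
}
```

The application must tolerate the transient field being null after deserialization, which is exactly the consistency caveat noted above.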

The Servlet specification (versions 2.2, 2.3, and 2.4) states that the servlet context should not be shared across the cluster. Non-clustered applications that rely on the servlet context as a singleton data structure therefore have porting issues when moving to a clustered environment.

A more subtle issue that arises in clustered environments is the issue of object sharing. In a non-clustered application, if two session attributes reference a common object, changes to the shared object are visible as part of both session attributes. However, this is not the case in most clustered applications. To avoid unnecessary use of compute resources, most session management implementations serialize and deserialize session attributes individually on demand. Coherence*Web (Traditional and Split session models) normally operates in this manner. If two session attributes that reference a common object are separately deserialized, the shared common object is instantiated twice. For applications that depend on shared object behavior and cannot be readily corrected, Coherence*Web provides the option of a Monolithic session model, which serializes and deserializes the entire session object as a single operation. This provides compatibility for applications that were not originally designed with clustering in mind.

Many projects require sharing session data between different Web applications. The challenge is that each Web application typically has its own class loader, so objects cannot readily be shared between separate Web applications. There are two general workarounds, each with its own set of trade-offs:

  • Place common classes in the Java CLASSPATH, allowing multiple applications to share instances of those classes at the expense of a slightly more complicated configuration.

  • Use Coherence*Web to share session data across class loader boundaries. Each Web application is treated as a separate cluster member, even if they run within the same JVM. This approach provides looser coupling between Web applications (assuming the serialized classes share a common serialVersionUID), but incurs a performance impact because objects must be serialized and deserialized to be transferred between cluster members.

5.1.7 Scalability and Performance

Moving to a clustered environment makes session size a critical consideration. Memory usage is a factor whether or not an application is clustered, but clustered applications must also consider the increased CPU and network load that larger sessions introduce. While non-clustered applications using in-memory sessions are not required to serialize and deserialize session state, clustered applications must do so every time session state is updated. Serializing session state and transmitting it over the network becomes a critical factor in application performance. For this reason and others, session size should generally be limited to no more than a few kilobytes.
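One way to gauge the per-update cost described above is to measure the serialized form of a candidate session attribute. The probe below uses plain Java serialization as a rough proxy only; Coherence can use its own serialization formats, so treat the numbers as relative, not exact:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class SessionSizeProbe {

    // Returns the java.io serialized size of a value: a rough proxy for the
    // network cost of replicating it as a session attribute on each update.
    public static int serializedSize(Serializable value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        return bos.size();
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, String> small = new HashMap<>();
        small.put("theme", "dark");

        HashMap<String, String> large = new HashMap<>();
        large.put("blob", "x".repeat(100_000));   // a deliberately large attribute

        System.out.println("small attribute: " + serializedSize(small) + " bytes");
        System.out.println("large attribute: " + serializedSize(large) + " bytes");
    }
}
```

Attributes that measure in the tens or hundreds of kilobytes are the ones the Split model moves into separate cache entries.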

While the Traditional and Monolithic session models for Coherence*Web have the same limiting factor, the Split session model was explicitly designed to efficiently support large HTTP sessions. Using a single clustered cache entry to contain all of the small session attributes means that network traffic is minimized when accessing and updating the session or any of its smaller attributes. Independently deserializing each attribute means that CPU usage is minimized. By splitting out larger session attributes into separate clustered cache entries, Coherence*Web ensures that the application only pays the cost for those attributes when they are actually accessed or updated. Additionally, because Coherence*Web leverages the data management features of Coherence, all of the underlying features are available for managing session attributes, such as near caching, NIO buffer caching, and disk-based overflow.

Figure 5-5 illustrates performance as a function of session size. Each session consists of ten 10-character Strings and from zero to 100 10,000-character Strings. Each HTTP request reads a single small attribute and a single large attribute (for cases where the session contains any), and 50 percent of requests update those attributes. Tests were performed on a two-server cluster. Note the similar performance between the Traditional and Monolithic models; serializing and deserializing Strings consumes minimal CPU resources, so there is little performance gain from deserializing only the attributes that are actually used. The performance gain of the Split model increases to over 37:1 by the time session size reaches one megabyte (100 large Strings). In a clustered environment, it is particularly true that application requests that access only essential data have the opportunity to scale and perform better; this is part of the reason that sessions should be kept to a reasonable size.

Figure 5-5 Performance as a Function of Session Size


Another optimization is the use of transient data members in session attribute classes. Because Java serialization routines ignore transient fields, they provide a very convenient means of controlling whether session attributes are clustered or isolated to a single cluster member. These are useful in situations where data can be "lazy loaded" from other data sources (and therefore recalculated during a server failover process), and also in scenarios where absolute reliability is not critical. If an application can withstand the loss of a portion of its session state with zero (or acceptably minimal) impact on the user, then the performance benefit may be worth considering. In a similar vein, it is not uncommon for high-scale applications to treat session loss as a session timeout, requiring the user to log back in to the application (which has the implicit benefit of properly setting user expectations regarding the state of their application session).

Sticky load balancing plays a critical role because session state is not globally visible across the cluster. For high-scale clusters, user requests normally enter the application tier through a set of stateless load balancers, which redistribute (more or less randomly) these requests across a set of sticky load balancers, such as Microsoft IIS or Apache HTTP Server. These sticky load balancers are responsible for the more computationally intensive task of parsing the HTTP headers to determine which server instance should process the request (based on the server ID specified by the session cookie). If requests are misrouted for any reason, session integrity is lost. For example, some load balancers may not parse HTTP headers for requests with large amounts of POST data (for example, more than 64 KB), so these requests are not routed to the appropriate server instance. Other causes of routing failure include corrupted or malformed server IDs in the session cookie. Most of these issues can be handled with proper selection of a load balancer and by designing tolerance into the application whenever possible (for example, ensuring that all large POST requests avoid accessing or modifying session state).

Sticky load balancing aids the performance of Coherence*Web but is not required. Because Coherence*Web is built on the Coherence data management platform, all session data is globally visible across the cluster. A typical Coherence*Web deployment places session data in a near cache topology, which uses a partitioned cache to manage huge amounts of data in a scalable and fault-tolerant manner, combined with local caches in each application server JVM to provide instant access to commonly used session state. While a sticky load balancer is not required when Coherence*Web is used, there are two key benefits to using one. Due to the use of near cache technology, read access to session attributes is instant if user requests are consistently routed to the same server, as using the local cache avoids the cost of deserialization and network transfer of session attributes. Additionally, sticky load balancing allows Coherence to manage concurrency locally, transferring session locks only when a user request is rebalanced to another server.

5.2 Session and Session Attribute Scoping

Coherence*Web allows fine-grained control over how both session data and session attributes are scoped (or shared) across application boundaries.

5.2.1 Session Scoping

Coherence*Web allows session data to be shared by different Web applications deployed in the same or different Web containers. To do so, you must correctly configure the session cookie context parameters and make the classes of objects stored in session attributes available to each Web application.

If you are using cookies to store session IDs (that is, you are not using URL rewriting), you must set the session cookie path to a common context path for all Web applications that share session data. For example, to share session data between two Web applications registered under the context paths /web/HRPortal and /web/InWeb, you should set the coherence-session-cookie-path parameter to /web. On the other hand, if the two Web applications are registered under the context paths /HRPortal and /InWeb, you should set the coherence-session-cookie-path parameter to a slash (/).

If the Web applications whose session data you would like to share are deployed on different Web containers running on different machines (that are not behind a common load balancer), you must also configure the session cookie domain to a domain shared by the machines. For example, to share session data between two Web applications running on server1.mydomain.com and server2.mydomain.com, you must set the coherence-session-cookie-domain context parameter to .mydomain.com.
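Putting the two cookie parameters together, a web.xml fragment for the /web/HRPortal and /web/InWeb example above might look like the following. The values are taken from the example in the text; adjust them for your own context paths and domain:

```xml
<!-- In the web.xml of both /web/HRPortal and /web/InWeb: share the session
     cookie across applications under /web and across *.mydomain.com hosts -->
<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web</param-value>
</context-param>
<context-param>
  <param-name>coherence-session-cookie-domain</param-name>
  <param-value>.mydomain.com</param-value>
</context-param>
```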

To correctly serialize or deserialize objects stored in shared sessions, the classes of all objects stored in session attributes must be available to Web applications that share session data.

Note:

For advanced use cases where EAR cluster node-scoping or application server JVM cluster scoping is employed and you do not want session data shared across individual Web applications, see "Preventing Web Applications from Sharing Session Data".

5.2.1.1 Preventing Web Applications from Sharing Session Data

Sometimes you might want to explicitly prevent HTTP session data from being shared by different Java EE applications that participate in the same Coherence cluster. For example, assume you have two applications, HRPortal and InWeb, that share cached data in their Enterprise JavaBeans (EJB) tiers but use different session data. In this case, it is desirable for both applications to be part of the same Coherence cluster, but undesirable for both applications to use the same clustered service for session data. One way to do this is to use the ApplicationScopeController interface to define the scope of an application's attributes. "Session Attribute Scoping" describes this technique. Another way is to specify a unique session cache service name for each application.

Follow these steps to specify a unique session cache service name for each application:

  1. Locate the <service-name/> elements in each default-session-cache-config.xml file found in your application.

  2. Set the elements to a unique value for each application.

    This forces each application to use a separate clustered service for session data.

  3. Include the modified default-session-cache-config.xml file with the application.

Example 5-2 illustrates a sample default-session-cache-config.xml file for an HRPortal application. To prevent the HRPortal application from sharing session data with the InWeb application, rename the <service-name> element for the replicated scheme to ReplicatedSessionsMiscHRP and rename the <service-name> elements for the distributed schemes to DistributedSessionsHRP.

Example 5-2 Configuration to Prevent Applications from Sharing Session Data

<replicated-scheme>
  <scheme-name>default-replicated</scheme-name>
  <service-name>ReplicatedSessionsMisc</service-name> <!-- rename this to ReplicatedSessionsMiscHRP -->
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</replicated-scheme>

<distributed-scheme>
  <scheme-name>session-distributed</scheme-name>
  <service-name>DistributedSessions</service-name> <!-- rename this to DistributedSessionsHRP -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>session-certificate</scheme-name>
  <service-name>DistributedSessions</service-name> <!-- rename this to DistributedSessionsHRP -->
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>session-certificate-autoexpiring</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

5.2.1.2 Working with Multiple Cache Configurations

If you are running two or more applications under Coherence*Web, they might have different cache configurations. In this case, the cache configuration on the cache server must contain the union of these cache configurations, regardless of whether you run in storage-enabled or storage-disabled mode. This allows the applications to be supported in the same cache cluster.

5.2.1.3 Keeping Session Cookies Separate

If you are using cookies to store session IDs, you must ensure that session cookies created by one application are not propagated to another application. To do this, you must set each application's session cookie domain and path in their web.xml file. To prevent cookies from being propagated, ensure that no two applications share the same context path.

For example, assume you have two Web applications registered under the context paths /web/HRPortal and /web/InWeb. To prevent the Web applications from sharing session data through cookies, set the cookie path to /web/HRPortal in one application, and set the cookie path to /web/InWeb in the other application.

If your applications are deployed on different Web containers running on separate machines, then you can configure the cookie domain to ensure that they are not in the same domain.

For example, assume you have two Web applications running on server1.mydomain.com and server2.mydomain.com. To prevent session cookies from being shared between them, set the cookie domain in one application to server1.mydomain.com, and set the cookie domain in the other application to server2.mydomain.com.
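For the context-path example above, the relevant web.xml fragment in each application might look like the following (one fragment per application; the paths are the examples from the text):

```xml
<!-- In /web/HRPortal's web.xml: confine the session cookie to this application -->
<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web/HRPortal</param-value>
</context-param>

<!-- In /web/InWeb's web.xml -->
<context-param>
  <param-name>coherence-session-cookie-path</param-name>
  <param-value>/web/InWeb</param-value>
</context-param>
```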

5.2.2 Session Attribute Scoping

When sessions are shared across Web applications, you might want to scope individual session attributes so that they are either globally visible (that is, all Web applications can see and modify these attributes) or scoped to an individual Web application (that is, not visible to any instance of another application).

Coherence*Web provides the ability to control this behavior by using the AttributeScopeController interface. This optional interface can selectively scope attributes in cases when a session might be shared across multiple applications. This allows different applications to potentially use the same attribute names for the application-scope state without accidentally reading, updating, or removing other applications' attributes. In addition to having application-scoped information in the session, this interface allows the session to contain global (unscoped) information that can be read, updated, and removed by any of the applications that shares the session.

Two implementations of the AttributeScopeController interface are available: ApplicationScopeController and GlobalScopeController. The GlobalScopeController implementation does not scope attributes, while ApplicationScopeController scopes all attributes to the application by prefixing the name of the application to all attribute names.

Use the coherence-application-name context parameter to specify the name of the application (and the Web module in which the application appears). The ApplicationScopeController interface will use the name of the application to scope the attributes. If you do not configure this parameter, then Coherence*Web uses the name of the class loader instead. For more information, see the description of coherence-application-name in Table 2-2.
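The qualifying behavior can be pictured with a toy sketch. This is not the actual ApplicationScopeController implementation; the delimiter and method name are purely illustrative of the idea that attribute names are prefixed with the application name:

```java
// Toy sketch of application-scoped attribute naming. The real
// ApplicationScopeController in Coherence*Web qualifies attribute names
// with the application name; this version only mimics that idea, and the
// "." delimiter is an assumption made for illustration.
public class ScopeSketch {

    static String scoped(String appName, String attributeName) {
        return appName + "." + attributeName;
    }

    public static void main(String[] args) {
        // Two applications can use the same logical attribute name
        // without colliding in a shared session.
        System.out.println(scoped("HRPortal", "preferences"));
        System.out.println(scoped("InWeb", "preferences"));
    }
}
```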

Note:

After a configured AttributeScopeController implementation is created, it is initialized with the name of the Web application, which it can use to qualify attribute names. Use the coherence-application-name context parameter to configure the name of your Web application.

5.2.2.1 Sharing Session Information Between Multiple Applications

Coherence*Web allows multiple applications to share the same session object. To do this, the session attributes must be visible to all applications. You must also specify which URLs served by WebLogic Server will be able to receive cookies.

To allow the applications to share and modify the session attributes, reference the GlobalScopeController (com.tangosol.coherence.servlet.AbstractHttpSessionCollection$GlobalScopeController) interface as the value of the coherence-scopecontroller-class context parameter in the web.xml file. GlobalScopeController is an implementation of the com.tangosol.coherence.servlet.HttpSessionCollection$AttributeScopeController interface that allows individual session attributes to be globally visible.

Example 5-3 illustrates the GlobalScopeController interface specified in the web.xml file.

Example 5-3 GlobalScopeController Specified in the web.xml File

<?xml version="1.0" encoding="UTF-8"?>
<web-app>
  ...
  <context-param>
    <param-name>coherence-scopecontroller-class</param-name>
    <param-value>com.tangosol.coherence.servlet.AbstractHttpSessionCollection$GlobalScopeController</param-value>
  </context-param>
  ...
</web-app>

5.3 Cluster Node Isolation

There are several different ways in which you can deploy Coherence*Web. One of the things to consider when deciding on a deployment option is cluster node isolation. Cluster node isolation considers:

  • The number of Coherence nodes that are created within an application server JVM

  • Where the Coherence library is deployed

Applications can be application server-scoped, EAR-scoped, or WAR-scoped. This section describes these considerations. For detailed information about the XML configuration for each of these options, see "Configure Coherence*Web Storage Mode".

5.3.1 Application Server-Scoped Cluster Nodes

With this configuration, all deployed applications in a container using Coherence*Web become part of one Coherence node. This configuration produces the smallest number of Coherence nodes in the cluster (one for each Web container JVM) and, because the Coherence library (coherence.jar) is deployed in the container's class path, only one copy of the Coherence classes is loaded into the JVM. This minimizes the use of resources. On the other hand, because all applications are using the same cluster node, all applications are affected if one application malfunctions.

Figure 5-6 illustrates an application server-scoped cluster with two cluster nodes (application server instances). Because Coherence*Web is deployed to each instance's class path, each instance can be considered a Coherence node. Each node contains two EAR files, and each EAR file contains two WAR files. All of the applications running in each instance share the same Coherence library and classes.

Figure 5-6 Application Server-Scoped Cluster


For WebLogic Server, all Coherence*Web-enabled applications have application server scope. "Configure Coherence*Web Storage Mode" describes the XML configuration requirements for application server-scoped cluster nodes for WebLogic Server.

Application server scope is not available for GlassFish Server.

Note:

Consider the use of the application server-scoped cluster configuration very carefully. Do not use it in environments where application interaction is unknown or unpredictable.

An example of such an environment might be a deployment where multiple application teams are deploying applications written independently, without carefully coordinating and enforcing their conventions and naming standards. With this configuration, all applications are part of the same cluster—the likelihood of collisions between namespaces for caches, services, and other configuration settings is quite high and could lead to unexpected results.

For these reasons, Oracle strongly recommends that you use EAR-scoped or WAR-scoped cluster node configurations. If you are in doubt regarding which deployment topology to choose, or if this warning applies to your deployment, then do not choose the application server-scoped cluster node configuration.

5.3.2 EAR-Scoped Cluster Nodes

With this configuration, all deployed applications within each EAR file become part of one Coherence node. This configuration produces one Coherence node for each deployed EAR file that uses Coherence*Web. Because the Coherence library (coherence.jar) is deployed in the application's classpath, only one copy of the Coherence classes is loaded for each EAR file. Since all Web applications in the EAR file use the same cluster node, all Web applications in the EAR file are affected if one of the Web applications malfunctions.

Figure 5-7 illustrates four EAR-scoped cluster nodes. Since Coherence*Web has been deployed to each EAR file, each EAR file becomes a cluster node. All applications running inside each EAR file have access to the same Coherence libraries and classes.

Figure 5-7 EAR-Scoped Cluster


EAR-scoped cluster nodes reduce the deployment effort because no changes to the application server class path are required. This option is also ideal if you plan to deploy only one EAR file to an application server.

For more information on XML configuration requirements for EAR-scoped cluster nodes, see "Configuring EAR-Scoped Cluster Nodes".

Note:

This configuration is not available for Coherence*Web applications running on the WebLogic Server platform. Applications running on the WebLogic Server platform can be only application server-scoped.

5.3.3 WAR-Scoped Cluster Nodes

With this configuration, each deployed Web application becomes its own Coherence node. This configuration produces the largest number of Coherence nodes in the cluster (one for each deployed WAR file that uses Coherence*Web) and because the Coherence library (coherence.jar) is deployed in the Web application's class path, there will be as many copies of the Coherence classes loaded as there are deployed WAR files. This results in the largest resource utilization of the three options. However, because each deployed Web application is its own cluster node, Web applications are completely isolated from other potentially malfunctioning Web applications.

WAR-scoped cluster nodes reduce the deployment effort because no changes to the application server class path are required. This option is also ideal if you plan to deploy only one WAR file to an application server.

Figure 5-8 illustrates two different configurations of WAR files in application servers. Because each WAR file contains a copy of Coherence*Web (and Coherence), it can be considered a cluster node.

Figure 5-8 WAR-Scoped Clusters

WAR-Scoped Clusters
Description of "Figure 5-8 WAR-Scoped Clusters"

For more information on XML configuration requirements for WAR-scoped cluster nodes, see "Configuring WAR-Scoped Cluster Nodes".

Note:

This configuration is not available for Coherence*Web applications running on the WebLogic Server platform. Applications running on the WebLogic Server platform can be only application server-scoped.

5.4 Session Locking Modes

Oracle Coherence provides the following configuration options for concurrent access to HTTP sessions.

  • Optimistic Locking, which allows concurrent access to a session by multiple threads in a single member or multiple members, while prohibiting concurrent modification.

  • Last-Write-Wins Locking, which is a variation of Optimistic Locking. This allows concurrent access to a session by multiple threads in a single member or multiple members. In this case, the last write is saved. This is the default locking mode.

  • Member Locking, which allows concurrent access and modification of a session by multiple threads in the same member, while prohibiting concurrent access by threads in different members.

  • Application Locking, which allows concurrent access and modification of a session by multiple threads in the same Web application instance, while prohibiting concurrent access by threads in different Web application instances.

  • Thread Locking, which prohibits concurrent access and modification of a session by multiple threads in a single member.

Note:

Generally, Web applications that are part of the same cluster must use the same locking mode and sticky session optimizations setting. Inconsistent configurations could result in deadlock.

You can specify the session locking mode used by your Web applications by setting the coherence-session-locking-mode context parameter. Table 5-1 lists the context parameter values and the corresponding session locking modes they specify. For more information about the coherence-session-locking-mode context parameter, see the following sections and Appendix A, "Coherence*Web Context Parameters."

Table 5-1 Summary of coherence-session-locking-mode Context Parameter Values

Locking Mode coherence-session-locking-mode Values

Optimistic Locking

optimistic

Last-Write-Wins Locking

none

Member Locking

member

Application Locking

app

Thread Locking

thread
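
For example, a Web application could select the Member Locking mode by setting the context parameter in its web.xml file. This is a sketch of the relevant fragment; element placement follows the standard web.xml deployment descriptor:

```xml
<!-- Selects the Member Locking mode for this Web application. -->
<context-param>
  <param-name>coherence-session-locking-mode</param-name>
  <param-value>member</param-value>
</context-param>
```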


5.4.1 Optimistic Locking

Optimistic Locking mode allows multiple Web container threads in one or more members to access the same session concurrently. This setting does not use explicit locking; rather an optimistic approach is used to detect and prevent concurrent updates upon completion of an HTTP request that modifies the session. The exception ConcurrentModificationException is thrown when the session is flushed to the cache, which is after the Servlet request has finished processing. To view the exception, set the weblogic.debug.DebugHttpSessions system property to true in the container's startup script (for example: -Dweblogic.debug.DebugHttpSessions=true).

The Optimistic Locking mode can be configured by setting the coherence-session-locking-mode parameter to optimistic.

5.4.2 Last-Write-Wins Locking

Coherence*Web and the Coherence*Web SPI are configured with Last-Write-Wins Locking by default. Last-Write-Wins Locking mode is a variation on the Optimistic Locking mode. It allows multiple Web container threads in one or more members to access the same session concurrently. This setting does not use explicit locking; it does not prevent concurrent updates upon completion of an HTTP request that modifies the session. Instead, the last write, that is, the last modification made, is allowed to modify the session.

The Last-Write-Wins Locking mode can be configured by setting the coherence-session-locking-mode parameter to none. This value will allow concurrent modification to sessions with the last update being applied.

5.4.3 Member Locking

The Member Locking mode allows multiple Web container threads in the same cluster node to access and modify the same session concurrently, but prohibits concurrent access by threads in different members. This is accomplished by acquiring a member-level lock for an HTTP session when the session is acquired. The lock is released on completion of the HTTP request. For more information about member-level locks, see <lease-granularity> in the "distributed-scheme" section of Oracle Fusion Middleware Developing Applications with Oracle Coherence.

The Member Locking mode can be configured by setting the coherence-session-locking-mode parameter to member.

5.4.4 Application Locking

The Application Locking mode restricts session access (and modification) to threads in a single Web application instance at a time. This is accomplished by acquiring both a member-level and application-level lock for an HTTP session when the session is acquired, and releasing both locks upon completion of the HTTP request. For more information about member-level locks, see <lease-granularity> in the "distributed-scheme" section of Oracle Fusion Middleware Developing Applications with Oracle Coherence.

The Application Locking mode can be configured by setting the coherence-session-locking-mode parameter to app.

5.4.5 Thread Locking

Thread Locking mode restricts session access (and modification) to a single thread in a single member at a time. This is accomplished by acquiring a member-level, application-level, and thread-level lock for an HTTP session when the session is acquired, and releasing all three locks upon completion of the request. For more information about member-level locks, see <lease-granularity> in the "distributed-scheme" section of Oracle Fusion Middleware Developing Applications with Oracle Coherence.

The Thread Locking mode can be configured by setting the coherence-session-locking-mode parameter to thread.

5.4.6 Troubleshooting Locking in HTTP Sessions

Enabling Member, Application, or Thread Locking for HTTP session access means that Coherence*Web acquires a clusterwide lock for every HTTP request that requires access to a session. By default, threads that attempt to access a locked session (locked by a thread in a different member) block until the lock can be acquired. If you want to enable a timeout for lock acquisition, configure it with the coherence-session-get-lock-timeout context parameter, for example:

...
  <context-param>
    <param-name>coherence-session-get-lock-timeout</param-name>
    <param-value>30</param-value>
  </context-param>
...

Many Web applications do not have such a strict concurrency requirement. For these applications, using the Optimistic Locking mode has the following advantages:

  • The overhead of obtaining and releasing clusterwide locks for every HTTP request is eliminated.

  • Requests can be load-balanced away from failing or unresponsive members to active members without requiring the unresponsive member to release the clusterwide lock on the session.

Coherence*Web provides a diagnostic invocation service that is executed when a member cannot acquire the cluster lock for a session. You can control whether this service is enabled by setting the coherence-session-log-threads-holding-lock context parameter. If this context parameter is set to true (default), then the invocation service will cause the member that has ownership of the session to log the stack trace of the threads that are currently holding the lock.

Note that the coherence-session-log-threads-holding-lock context parameter is available only when the coherence-sticky-sessions context parameter is set to true. This requirement exists because Coherence*Web will acquire a clusterwide lock for every session access request unless the sticky session optimization is enabled. By enabling the sticky session optimization, frequent lock acquisition, and the resulting flood of log messages, can be avoided.

Like all Coherence*Web messages, the Coherence logging-config operational configuration element controls how the message is logged. For more information on how to configure logging in Coherence, see the description of logging-config, in "Operation Configuration Elements" in Oracle Fusion Middleware Developing Applications with Oracle Coherence.

5.4.7 Enabling Sticky Session Optimizations

If Member, Application, or Thread Locking is a requirement for a Web application that resides behind a sticky load balancer, Coherence*Web provides an optimization for obtaining the clusterwide lock required for HTTP session access. By definition, a sticky load balancer attempts to route each request for a given session to the same application server JVM to which it previously routed requests for that session. This should be the same application server JVM that created the session. The sticky session optimization takes advantage of this behavior by retaining the clusterwide lock for a session until the session expires or until the JVM is asked to release it. If, for whatever reason, the sticky load balancer sends a request for the same session to another application server JVM, that JVM asks the JVM that owns the lock on the session to release the lock as soon as possible. For more information, see the SessionOwnership entry in Table C-2.

Sticky session optimization can be enabled by setting the coherence-sticky-sessions context parameter to true. This setting requires that Member, Application, or Thread Locking is enabled.
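
As a sketch, enabling the sticky session optimization together with Member Locking might look like this in web.xml (this assumes a sticky load balancer fronts the application):

```xml
<!-- The sticky session optimization requires Member, Application,
     or Thread Locking; Member Locking is used here as an example. -->
<context-param>
  <param-name>coherence-session-locking-mode</param-name>
  <param-value>member</param-value>
</context-param>
<context-param>
  <param-name>coherence-sticky-sessions</param-name>
  <param-value>true</param-value>
</context-param>
```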

5.5 Deployment Topologies

Coherence*Web supports most of the same deployment topologies that Coherence does, including in-process, out-of-process (that is, client/server deployment), and bridging clients and servers over Coherence*Extend. The major supported deployment topologies are described in the following sections.

  • In-Process Topology, also known as local storage enabled, is where session data is stored in-process with the application server

  • Out-of-Process Topology, also known as local storage disabled, is where the application servers are configured as cache clients and dedicated JVMs run as cache servers, physically storing and managing the clustered data.

  • Out-of-Process with Coherence*Extend Topology, in which communication between the application server tier and the cache server tier is over Coherence*Extend (TCP/IP).

5.5.1 In-Process Topology

The in-process topology is not recommended for production use and is supported mainly for development and testing. By storing the session data in-process with the application server, this topology is very easy to get up and running quickly for smoke tests, development, and testing. In this topology, local storage is enabled (that is, tangosol.coherence.distributed.localstorage=true).

Figure 5-9 illustrates the in-process topology. All of the application servers communicate with the same session data cache.

Figure 5-9 In-Process Deployment Topology

In-Process Deployment Topology
Description of "Figure 5-9 In-Process Deployment Topology"

5.5.2 Out-of-Process Topology

For the out-of-process deployment topology, the application servers (that is, application server tier) are configured as cache clients (that is, tangosol.coherence.distributed.localstorage=false) and there are dedicated JVMs running as cache servers, physically storing and managing the clustered data.

This approach has these benefits:

  • Session data storage is offloaded from the application server tier to the cache server tier. This reduces heap usage, garbage collection times, and so on.

  • The application and cache server tiers can be scaled independently. If more application processing power is needed, just start more application servers. If more session storage capacity is needed, just start more cache servers.

The Out-of-Process topology is the default recommendation of Oracle Coherence due to its flexibility. Figure 5-10 illustrates the out-of-process topology. Each of the servers in the application tier maintains its own near cache. These near caches communicate with the session data cache, which runs in a separate cache server tier.

Figure 5-10 Out-of-Process Deployment Topology

Out of Process Deployment Topology
Description of "Figure 5-10 Out-of-Process Deployment Topology"

5.5.2.1 Migrating from In-Process to Out-of-Process Topology

You can easily migrate your application from an in-process to an out-of-process topology. To do this, you must run a cache server in addition to the application server. Start the cache server in storage-enabled mode and ensure that it references the same session and cache configuration file (default-session-cache-config.xml) that the application server uses. Start the application server in storage-disabled mode. See "Migrating to Out-of-Process Topology" for detailed information.
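
The migration steps above can be sketched as launch commands; the class path entries and file locations are assumptions for illustration:

```shell
# Start a storage-enabled cache server that references the same session
# cache configuration file as the application (paths are hypothetical).
java -cp coherence.jar:coherence-web.jar \
     -Dtangosol.coherence.distributed.localstorage=true \
     -Dtangosol.coherence.cacheconfig=default-session-cache-config.xml \
     com.tangosol.net.DefaultCacheServer

# Then start the application server JVM in storage-disabled mode by
# adding this system property to its startup script:
#   -Dtangosol.coherence.distributed.localstorage=false
```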

5.5.3 Out-of-Process with Coherence*Extend Topology

Coherence*Extend consists of two components: an extend client running outside the cluster and an extend proxy service running in the cluster, hosted by one or more cache servers. The out-of-process with Coherence*Extend topology is similar to the out-of-process topology except that the communication between the application server tier and the cache server tier is over Coherence*Extend (TCP/IP). For information about configuring this scenario, see "Configuring Coherence*Web with Coherence*Extend". For information about Coherence*Extend, see Oracle Fusion Middleware Developing Remote Clients for Oracle Coherence.

This approach has the same benefits as the out-of-process topology, plus the ability to divide the deployment of application servers and cache servers into segments. This is ideal in an environment where application servers are on a network that does not support UDP. The cache servers can be set up in a separate dedicated network, with the application servers connecting to the cluster by using TCP.

Figure 5-11 illustrates the out-of-process with Coherence*Extend topology. Near caches in the servers in the application server tier use an extend proxy to communicate with the session data cache in the cache server tier.

Figure 5-11 Out-of-Process with Coherence*Extend Deployment Topology

Out-of-Process with Coherence*Extend Topology
Description of "Figure 5-11 Out-of-Process with Coherence*Extend Deployment Topology"

5.5.4 Configuring Coherence*Web with Coherence*Extend

One of the deployment options for Coherence*Web is to use Coherence*Extend to connect Web container JVMs to the cluster by using TCP/IP. This configuration should be considered if any of the following situations applies:

  • The Web tier JVMs are in a DMZ while the Coherence cluster is behind a firewall.

  • The Web tier is in an environment that does not support UDP.

  • Web tier JVMs experience long or frequent garbage collection (GC) pauses.

  • Web tier JVMs are restarted frequently.

In these deployments, there are three types of participants:

  • Web tier JVMs, which are the Extend clients in this topology. They are not members of the cluster; instead, they connect to a proxy node in the cluster that will issue requests to the cluster on their behalf.

  • Proxy JVMs, which are storage-disabled members (nodes) of the cluster that accept and manage TCP/IP connections from Extend clients. Requests that arrive from clients will be sent into the cluster, and responses will be sent back through the TCP/IP connections.

  • Storage JVMs, which are used to store the actual session data in memory.

To Configure Coherence*Web to Use Coherence*Extend

  1. Configure Coherence*Web to use the Optimistic Locking mode (see "Optimistic Locking").

  2. Configure a cache configuration file for the proxy and storage JVMs (see "Configure the Cache for Proxy and Storage JVMs").

  3. Modify the Web tier cache configuration file to point to one or more of the proxy JVMs (see "Configure the Cache for Web Tier JVMs").

5.5.4.1 Configure the Cache for Proxy and Storage JVMs

The session cache configuration file (WEB-INF/classes/default-session-cache-config.xml) is an example Coherence*Web session cache configuration file that uses Coherence*Extend.

Use this file for the proxy and storage JVMs. It contains system property overrides that allow the same file to be used for both proxy and storage JVMs. When used by a proxy JVM, the system properties described in Table 5-2 should be specified.

Note:

If you are writing applications for the WebLogic Server platform and you are using a customized session cache configuration file, then the file must be packaged in a GAR file for deployment. For more information, see "Using a Custom Session Cache Configuration File".

For more information on the packaging requirements for a GAR file, see also "Packaging Coherence Applications for WebLogic Server" in Oracle Fusion Middleware Administering Oracle Coherence and "Creating Coherence Applications for Oracle WebLogic Server" in Developing Oracle Coherence Applications for Oracle WebLogic Server.

Table 5-2 System Property Values for Proxy JVMs

System Property Name Value

tangosol.coherence.session.localstorage

false

tangosol.coherence.session.proxy

true

tangosol.coherence.session.proxy.localhost

The host name or IP address of the NIC to which the proxy will bind.

tangosol.coherence.session.proxy.localport

A unique port number to which the proxy will bind.


When used by a cache server, specify the system properties described in Table 5-3.

Table 5-3 System Property Values for Storage JVMs

System Property Name Value

tangosol.coherence.session.localstorage

true

tangosol.coherence.session.proxy

false
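

Putting Table 5-2 and Table 5-3 together, the proxy and storage JVMs might be launched as follows. This is a sketch: the class path, configuration file location, and the proxy host and port values are assumptions for illustration.

```shell
# Proxy JVM: storage disabled, proxy service enabled (Table 5-2).
# The bind address and port shown here are hypothetical.
java -cp coherence.jar:coherence-web.jar \
     -Dtangosol.coherence.cacheconfig=default-session-cache-config-server.xml \
     -Dtangosol.coherence.session.localstorage=false \
     -Dtangosol.coherence.session.proxy=true \
     -Dtangosol.coherence.session.proxy.localhost=192.0.2.10 \
     -Dtangosol.coherence.session.proxy.localport=9099 \
     com.tangosol.net.DefaultCacheServer

# Storage JVM: storage enabled, proxy service disabled (Table 5-3).
java -cp coherence.jar:coherence-web.jar \
     -Dtangosol.coherence.cacheconfig=default-session-cache-config-server.xml \
     -Dtangosol.coherence.session.localstorage=true \
     -Dtangosol.coherence.session.proxy=false \
     com.tangosol.net.DefaultCacheServer
```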


Example 5-4 illustrates the complete session cache configuration file for a storage JVM.

Example 5-4 default-session-cache-config-server.xml File

<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Server-side cache configuration descriptor for Coherence*Web over     -->
<!-- Coherence*Extend (see default-session-cache-config-web-tier.xml).     -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <caching-scheme-mapping>
    <!--
    The clustered cache used to store Session management data.
    -->
    <cache-mapping>
      <cache-name>session-management</cache-name>

      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store ServletContext attributes.
    -->
    <cache-mapping>
      <cache-name>servletcontext-storage</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>
    <!--
    The clustered cache used to store Session attributes.
    -->
    <cache-mapping>
      <cache-name>session-storage</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store the "overflowing" (split-out due to size)
    Session attributes. Only used for the "Split" model.
    -->
    <cache-mapping>

      <cache-name>session-overflow</cache-name>
      <scheme-name>session-distributed</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store IDs of "recently departed" Sessions.
    -->
    <cache-mapping>
      <cache-name>session-death-certificates</cache-name>
      <scheme-name>session-certificate</scheme-name>

    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Distributed caching scheme used by the various Session caches.
    -->
    <distributed-scheme>
      <scheme-name>session-distributed</scheme-name>
      <scheme-ref>session-base</scheme-ref>

      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>

    <!--
    Distributed caching scheme used by the "recently departed" Session cache.
    -->
    <distributed-scheme>

      <scheme-name>session-certificate</scheme-name>
      <scheme-ref>session-base</scheme-ref>
      <backing-map-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>4000</high-units>
          <low-units>3000</low-units>

          <expiry-delay>86400</expiry-delay>
        </local-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <!--
    "Base" Distributed caching scheme that defines common configuration.
    -->
    <distributed-scheme>
      <scheme-name>session-base</scheme-name>

      <service-name>DistributedSessions</service-name>
      <serializer>
        <instance>
          <class-name>com.tangosol.io.DefaultSerializer</class-name>
        </instance>
      </serializer>
      <thread-count>0</thread-count>
      <lease-granularity>member</lease-granularity>
      <local-storage>true</local-storage>

      <partition-count>257</partition-count>
      <backup-count>1</backup-count>
      <backup-storage>
        <type>on-heap</type>
      </backup-storage>
      <backing-map-scheme>
        <local-scheme>

          <scheme-ref>unlimited-local</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <!--
    Proxy scheme that Coherence*Web clients use to connect to the cluster.
    -->
    <proxy-scheme>

      <service-name>SessionProxy</service-name>
      <thread-count>10</thread-count>
      <acceptor-config>
        <serializer>
          <instance>
            <class-name>com.tangosol.io.DefaultSerializer</class-name>
          </instance>
        </serializer>
        <tcp-acceptor>

          <local-address>
            <address system-property="tangosol.coherence.session.proxy.localhost">localhost</address>
            <port system-property="tangosol.coherence.session.proxy.localport">9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>

      <autostart system-property="tangosol.coherence.session.proxy">false</autostart>
    </proxy-scheme>

    <!--
    Local caching scheme definition used by all caches that do not require an
    eviction policy.
    -->
    <local-scheme>
      <scheme-name>unlimited-local</scheme-name>
      <service-name>LocalSessionCache</service-name>
    </local-scheme>  
  </caching-schemes>

</cache-config>

5.5.4.2 Configure the Cache for Web Tier JVMs

The session cache configuration file illustrated in Example 5-5 is based on the default-session-cache-config.xml file that can be found in the coherence-web.jar file. The example illustrates a Coherence*Web cache configuration file that uses Coherence*Extend. The Web tier JVMs should use this cache configuration file.

To Install the Session Cache Configuration File for the Web Tier:

  1. Extract the default-session-cache-config.xml file from the coherence-web.jar file.

  2. Add the host names (or IP addresses) and ports of the proxy JVMs to the <remote-addresses/> section of the file. In most cases, you should include the addresses and ports of all proxy JVMs for load balancing and failover.

    Note:

    The <remote-addresses> element contains the proxy server(s) to which the Web container will connect. By default, the Web container will pick an address at random (if there is more than one address in the configuration). If the connection between the Web container and the proxy is broken, the container will connect to another proxy in the list.

  3. Rename the file to default-session-cache-config-web-tier.xml.

  4. Place the file in the WEB-INF/classes directory of your Web application. If you used the WebInstaller to install Coherence*Web, replace the existing file that was added by the WebInstaller.

Example 5-5 illustrates the complete client-side session cache configuration file.

Example 5-5 default-session-cache-config-web-tier.xml File

<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Client-side cache configuration descriptor for Coherence*Web over     -->
<!-- Coherence*Extend (see default-session-cache-config-server.xml).       -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <caching-scheme-mapping>
    <!--
    The clustered cache used to store Session management data.
    -->
    <cache-mapping>
      <cache-name>session-management</cache-name>

      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store ServletContext attributes.
    -->
    <cache-mapping>
      <cache-name>servletcontext-storage</cache-name>
      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store Session attributes.
    -->
    <cache-mapping>
      <cache-name>session-storage</cache-name>
      <scheme-name>session-near</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store the "overflowing" (split-out due to size)
    Session attributes. Only used for the "Split" model.
    -->
    <cache-mapping>

      <cache-name>session-overflow</cache-name>
      <scheme-name>session-remote</scheme-name>
    </cache-mapping>

    <!--
    The clustered cache used to store IDs of "recently departed" Sessions.
    -->
    <cache-mapping>
      <cache-name>session-death-certificates</cache-name>
      <scheme-name>session-remote</scheme-name>

    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Near caching scheme used by the Session attribute cache. The front cache
    uses a Local caching scheme and the back cache uses a Remote caching
    scheme.
    -->
    <near-scheme>
      <scheme-name>session-near</scheme-name>
      <front-scheme>
        <local-scheme>

          <scheme-ref>session-front</scheme-ref>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>session-remote</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>

      <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>

    <local-scheme>
      <scheme-name>session-front</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>

      <low-units>750</low-units>
    </local-scheme>

    <remote-cache-scheme>
      <scheme-name>session-remote</scheme-name>
      <initiator-config>
        <serializer>
          <instance>
            <class-name>com.tangosol.io.DefaultSerializer</class-name>
          </instance>
        </serializer>
        <tcp-initiator>
          <remote-addresses>
            <!-- 
            The following list of addresses should include the hostname and port
            of all running proxy JVMs. This is for both load balancing and
            failover of requests from the Web tier.
            -->
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>

          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>

5.6 Accessing Sessions with Lazy Acquisition

By default, Web applications instrumented with the WebInstaller will always acquire a session whenever a servlet or filter is called. The session is acquired regardless of whether the servlet or filter actually needs a session. This can be expensive in terms of time and processing power if you run many servlets or filters that do not require a session.

To avoid this behavior, enable lazy acquisition by setting the coherence-session-lazy-access context parameter to true in the web.xml file. The session will be acquired only when the servlet or filter attempts to access it.
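
A sketch of the web.xml fragment that enables lazy acquisition:

```xml
<!-- Acquire the session only when a servlet or filter accesses it. -->
<context-param>
  <param-name>coherence-session-lazy-access</param-name>
  <param-value>true</param-value>
</context-param>
```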

5.7 Overriding the Distribution of HTTP Sessions and Attributes

The Coherence*Web Session Distribution Controller, described by the HttpSessionCollection.SessionDistributionController interface, enables you to override the default distribution of HTTP sessions and attributes in a Web application. You override the default distribution by setting the coherence-distributioncontroller-class context parameter (see "Registering a Session Distribution Controller Implementation"). The value of the context parameter indicates an implementation of the SessionDistributionController interface.

An implementation of the SessionDistributionController interface can identify sessions or attributes in any of the following ways:

  • Distributed, where a distributed session or attribute is stored within the Coherence data grid, and thus, accessible to other server JVMs. All sessions (and their attributes) are managed in a distributed manner. This is the default behavior and is provided by the com.tangosol.coherence.servlet.AbstractHttpSessionCollection$DistributedController implementation of the SessionDistributionController interface.

  • Local, where a local session or attribute is stored on the originating server's heap, and thus, only accessible by that server. The com.tangosol.coherence.servlet.AbstractHttpSessionCollection$LocalController class provides this behavior. This option is not recommended for production purposes, but it can be useful for testing the difference in scalable performance between local-only and fully distributed implementations.

  • Hybrid, which is similar to distributed in that all sessions and serializable attributes are managed in a distributed manner. However, unlike distributed, session attributes that do not implement the Serializable interface will be kept local. The com.tangosol.coherence.servlet.AbstractHttpSessionCollection$HybridController class provides this behavior.

At any point during the life of a session, the session or its attributes can change from local to distributed. However, after a session or attribute is distributed, it cannot change back to local.

You can use the Session Distribution Controller in any of the following ways:

  • You can allow new sessions to remain local until you add an attribute (for example, when you add the first item to an online shopping cart); the idea is that a session must be fault-tolerant only when it contains valuable data.

  • Some Web frameworks use session attributes to store the UI rendering state. Often, this data cannot be distributed because it is not serializable. Using the Session Distribution Controller, these attributes can be kept local while allowing the rest of the session attributes to be distributed.

  • The Session Distribution Controller can assist in the conversion from nondistributed to distributed systems, especially when the cost of distributing all sessions and all attributes is a consideration.

5.7.1 Implementing a Session Distribution Controller

Example 5-6 illustrates a sample implementation of the HttpSessionCollection.SessionDistributionController interface. In the sample, sessions are tested to see if they have a shopping cart attached; only those sessions are distributed. For a distributed session, each attribute is then tested: the ui-rendering attribute is kept local, while all other attributes are distributed.

Example 5-6 Sample Session Distribution Controller Implementation

import com.tangosol.coherence.servlet.HttpSessionCollection;
import com.tangosol.coherence.servlet.HttpSessionModel;
 
/**
* Sample implementation of SessionDistributionController
*/
public class CustomSessionDistributionController
        implements HttpSessionCollection.SessionDistributionController
    {
    public void init(HttpSessionCollection collection)
        {
        // no initialization required for this sample
        }
 
    /**
    * Only distribute sessions that have a shopping cart.
    *
    * @param model Coherence representation of the HTTP session
    *
    * @return true if the session should be distributed
    */
    public boolean isSessionDistributed(HttpSessionModel model)
        {
        return model.getAttribute("shopping-cart") != null;
        }
 
    /**
    * If a session is "distributed", then distribute all attributes with the 
    * exception of the "ui-rendering" attribute.
    *
    * @param model Coherence representation of the HTTP session
    * @param sName name of the attribute to check
    *
    * @return true if the attribute should be distributed
    */
    public boolean isSessionAttributeDistributed(HttpSessionModel model,
            String sName)
        {
        return !"ui-rendering".equals(sName);
        }
    } 

5.7.2 Registering a Session Distribution Controller Implementation

After you have written your SessionDistributionController implementation, you can register it with your application by using the coherence-distributioncontroller-class context parameter. See Appendix A, "Coherence*Web Context Parameters" for more information about this parameter.
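For example, assuming the CustomSessionDistributionController class from Example 5-6 is on the application classpath, the registration in the web.xml file might look similar to the following fragment:

```xml
...
<context-param>
   <param-name>coherence-distributioncontroller-class</param-name>
   <param-value>CustomSessionDistributionController</param-value>
</context-param>
...
```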

5.8 Detecting Changed Attribute Values

By default, Coherence*Web tracks whether attributes retrieved from the session have changed during the course of processing a request. It does this by caching the initial serialized binary form of each attribute when the attribute is retrieved from the session. At the end of the request, Coherence*Web compares the current binary value of the attribute with the cached "old" binary value. If the values do not match, then the current value is written to the cache.

If you know that your application does not mutate session attributes without performing a corresponding set, then you should set the coherence-enable-suspect-attributes context parameter to false. This improves memory use and near-cache optimization.
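The binary-comparison dirty check described above can be pictured with the following simplified Java sketch. This is an illustration only, not Coherence*Web's actual implementation; the class and method names are hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Arrays;

/**
* Simplified illustration of binary-comparison dirty checking for
* session attributes (not the actual Coherence*Web implementation).
*/
public class SuspectAttributeCheck
    {
    /**
    * Serialize a value to its binary form, as would be done when the
    * attribute is first retrieved from the session.
    */
    static byte[] toBinary(Serializable value)
        {
        try
            {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(baos))
                {
                oos.writeObject(value);
                }
            return baos.toByteArray();
            }
        catch (IOException e)
            {
            throw new IllegalStateException("serialization failed", e);
            }
        }

    /**
    * Compare the binary cached at retrieval time with the attribute's
    * current binary form; a mismatch means the attribute was mutated
    * during the request and must be written back to the cache.
    */
    static boolean isDirty(byte[] abOld, Serializable current)
        {
        return !Arrays.equals(abOld, toBinary(current));
        }
    }
```

A mutation such as adding an item to a cached list changes the serialized form, so the comparison detects it even though no explicit setAttribute call was made.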

5.9 Saving Non-Serializable Attributes Locally

By default, Coherence*Web attempts to serialize all session attributes. If you are working with any session attributes that are not serializable, you can store them locally by setting the coherence-preserve-attributes parameter to true. Because the non-serializable attributes exist only on the originating server, this parameter requires a load balancer so that requests for a session can be routed back to the server that holds those attributes.

Note that if the client (application server) fails, then the attributes will be lost. Your application must be able to recover from this.

The default for this parameter is false. If you are using ActiveCache for GlassFish, then this value will be set to true because the GlassFish Server requires local sessions to be available.

See Appendix A, "Coherence*Web Context Parameters" for more information about the coherence-preserve-attributes parameter.
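For example, a minimal web.xml fragment that enables local storage of non-serializable attributes might look like this:

```xml
...
<context-param>
   <param-name>coherence-preserve-attributes</param-name>
   <param-value>true</param-value>
</context-param>
...
```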

5.10 Securing Coherence*Web Deployments

To prevent unauthorized Coherence TCMP cluster members from accessing HTTP session cache servers, Coherence provides a Secure Socket Layer (SSL) implementation. This implementation can be used to secure TCMP communication between cluster nodes and TCP communication between Coherence*Extend clients and proxies. Coherence allows you to use the Transport Layer Security (TLS) 1.0 protocol, which is the successor to the SSL 3.0 protocol; however, this document uses the term SSL because it is the more widely recognized term.

This section provides only an overview of using SSL in a Coherence environment. For more information and sample configurations, see "Using SSL to Secure Communication" in Oracle Fusion Middleware Securing Oracle Coherence.

Using SSL to Secure TCMP Communications

A Coherence cluster can be configured to use SSL with TCMP. Coherence allows both one-way and two-way authentication. Two-way authentication is typically used more often than one-way authentication, which has fewer use cases in a cluster environment. In addition, remember that TCMP is a peer-to-peer protocol that generally runs in trusted environments where many cluster nodes are expected to remain connected with each other. Carefully consider the implications of SSL on administration and performance.

In this configuration, you can use the pre-defined, out-of-the-box SSL socket provider that allows for two-way communication SSL connections based on peer trust, or you can define your own SSL socket provider.
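As a hedged sketch only, enabling the predefined ssl socket provider for TCMP in an operational override file (tangosol-coherence-override.xml) might look similar to the following; consult "Using SSL to Secure Communication" in Securing Oracle Coherence for the complete, authoritative configuration:

```xml
<coherence>
   <cluster-config>
      <unicast-listener>
         <!-- use the predefined SSL socket provider for TCMP traffic -->
         <socket-provider system-property="tangosol.coherence.socketprovider">ssl</socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>
```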

Using SSL to Secure Extend Client Communication

Communication between extend clients and extend proxies can be secured using SSL. SSL requires configuration on both the client side and the cluster side. On the cluster side, you configure SSL in the cluster-side cache configuration file by defining an SSL socket provider for a proxy service. You can define the SSL socket provider either for all proxy services or for individual proxy services.

On the client side, you configure SSL in the client-side cache configuration file by defining an SSL socket provider for a remote cache scheme and, if required, for a remote invocation scheme. As on the cluster side, you can define the SSL socket provider either for all remote services or for individual remote services.
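As an illustrative sketch of the client side (the scheme, service, address, and port values here are placeholders, not values from this document; see Securing Oracle Coherence for tested configurations), a remote cache scheme might reference the predefined ssl socket provider as follows:

```xml
<remote-cache-scheme>
   <scheme-name>extend-ssl</scheme-name>
   <service-name>ExtendTcpSSLCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <!-- use the predefined SSL socket provider for the TCP connection -->
         <socket-provider>ssl</socket-provider>
         <remote-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>
```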

5.11 Customizing the Name of the Session Cache Configuration File

By default, Coherence*Web uses the information in the default-session-cache-config.xml file to configure the session caches in Coherence*Web. You can direct Coherence*Web to use a different file by specifying the coherence-cache-configuration-path context parameter in the web.xml file, for example:

...
<context-param>
   <param-name>coherence-cache-configuration-path</param-name>
   <param-value>my-default-session-cache-config-name.xml</param-value>
</context-param>
...

5.12 Configuring Logging for Coherence*Web

Coherence*Web uses the logging framework provided by Coherence. Coherence has its own logging framework and also supports the use of log4j, slf4j, and Java logging to provide a common logging environment for an application. Logging in Coherence occurs on a dedicated, low-priority thread to reduce the impact of logging on the critical portions of the system. Logging is pre-configured, and the default settings can be changed as required. For more information, see "Configuring Logging" in Oracle Fusion Middleware Developing Applications with Oracle Coherence.

The Coherence*Web logging level can also be set using the context parameter/system property coherence-session-logger-level. This is an alternative way to set the logging level for Coherence*Web (as opposed to using JDK logging). See Appendix A, "Coherence*Web Context Parameters" for more information on this parameter.
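For example, the logging level can be set declaratively in web.xml; the level value shown here is a hypothetical placeholder, so see Appendix A, "Coherence*Web Context Parameters" for the valid values:

```xml
...
<context-param>
   <param-name>coherence-session-logger-level</param-name>
   <!-- placeholder value; consult Appendix A for supported levels -->
   <param-value>FINE</param-value>
</context-param>
...
```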

WARNING:

Applications that use the JDK logging framework can configure Coherence to use JDK logging as well. Note, however, that setting the log level to FINEST can expose session IDs in the log file.

5.13 Getting Concurrent Access to the Same Session Instance

This feature was added as part of a patch release. For information about which features were added in a patch, see Release Notes for Oracle Coherence.

A cache delegator class is a class that is responsible for manipulating (getting, putting, or deleting) any data in the distributed cache. Use the coherence-cache-delegator-class context parameter in the web.xml file to specify the name of the class responsible for the data manipulation.

One of the possible values for the context parameter is the com.tangosol.coherence.servlet.LocalSessionCacheDelegator class. This class indicates that the local cache should be used for storing and retrieving the session instance before attempting to use the distributed cache. This delegator is useful for applications that require concurrent access to the same session instance.

Note:

This feature must be enabled when working with PeopleSoft applications.

To enable the LocalSessionCacheDelegator cache delegator, the following items must be configured in web.xml:

  • The coherence-cache-delegator-class context parameter with the value set to com.tangosol.coherence.servlet.LocalSessionCacheDelegator.

  • The coherence-preserve-attributes context parameter set to true to allow nonserializable objects to be stored in the session object.

  • The coherence-distributioncontroller-class context parameter with the value set to com.tangosol.coherence.servlet.AbstractHttpSessionCollection$HybridController. This value forces all sessions and serializable attributes to be managed in a distributed manner. All session attributes that do not implement the Serializable interface will be kept local. Note that the use of this context parameter also requires coherence-sticky-sessions optimization to be enabled.

Example 5-7 illustrates a sample configuration for the cache delegator in the web.xml file.

Example 5-7 Configuring Cache Delegator in the web.xml File

...
 <context-param>
    <param-name>coherence-cache-delegator-class</param-name>
    <param-value>com.tangosol.coherence.servlet.LocalSessionCacheDelegator</param-value>
 </context-param>
 <context-param>
    <param-name>coherence-preserve-attributes</param-name>
    <param-value>true</param-value>
 </context-param>
 <context-param>
    <param-name>coherence-distributioncontroller-class</param-name>
    <param-value>com.tangosol.coherence.servlet.AbstractHttpSessionCollection$HybridController</param-value>
 </context-param>
...

Also, when using LocalSessionCacheDelegator as the cache delegator, do not configure a near cache in the session-cache-config.xml file, because local session instances are used. Appendix D, "Session Cache Configuration File Without a Near Cache" illustrates a sample session-cache-config.xml file that omits a near cache configuration.