Oracle® Coherence Release Notes for Oracle Coherence Release 3.5.1, Part Number E15433-01
This chapter describes changes, enhancements, and corrections made to the Oracle Coherence documentation library for 3.5.1. The library can be found at the following URL:
http://download.oracle.com/docs/cd/E14526_01/index.htm
Web applications that use different sticky optimization and locking settings should not be intermixed within the same cluster. With that in mind, the following note has been added to the Session Models section of the Coherence*Web Session Management Features chapter in the User's Guide for Oracle Coherence*Web.
Note:
In general, Web applications that are part of the same Coherence cluster must use the same session model type. Inconsistent configurations may result in deserialization errors.

The following note has been added to the Session Locking Modes section of the Coherence*Web Session Management Features chapter in the User's Guide for Oracle Coherence*Web.
Note:
In general, Web applications that are part of the same Coherence cluster must use the same locking mode and sticky session optimization settings. Inconsistent configurations may result in deadlock.

This section describes the changes made to the Read-Through, Write-Through, Write-Behind, and Refresh-Ahead Caching chapter in Getting Started with Oracle Coherence. Corrected text appears in italics.
Table 2-1 illustrates the changes made to the Write-Behind Caching section.
Table 2-1 Changes Made to the Write-Behind Caching Description
In Managing Map Operations with Triggers in the Developer's Guide for Oracle Coherence, the introduction to Example 4–1 incorrectly states that the createMapTrigger method would return a new MapTriggerListener(new MyCustomTrigger()). The correct method name is createTriggerListener, not createMapTrigger.
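For reference, the corrected pattern looks something like the following sketch; the MyCustomTrigger class and its validation logic are assumptions standing in for the guide's actual example:

import com.tangosol.util.MapTrigger;
import com.tangosol.util.MapTriggerListener;

public class MyCustomTrigger implements MapTrigger {
    // Called before a change to an entry is committed; a real trigger
    // would validate or modify the entry here. (Triggers used with
    // distributed caches should also be serializable.)
    public void process(MapTrigger.Entry entry) {
    }

    // The factory method the corrected text refers to: note the name
    // createTriggerListener, not createMapTrigger.
    public static MapTriggerListener createTriggerListener() {
        return new MapTriggerListener(new MyCustomTrigger());
    }
}

The returned MapTriggerListener is then registered with the cache in the same way as any other MapListener.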
The pre-3.5.1 cache-config.dtd was not well-formed in that it was missing a comma (in line 445) between the thread-count? and task-hung-threshold? attributes in the proxy-scheme element definition:
... <!ELEMENT proxy-scheme (scheme-name?, scheme-ref?, service-name?, thread-count? task-hung-threshold?, task-timeout?, request-timeout?, acceptor-config?, proxy-config?, autostart?)> ...
This has been fixed for the 3.5.1 release.
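The corrected element definition restores the missing comma:

... <!ELEMENT proxy-scheme (scheme-name?, scheme-ref?, service-name?, thread-count?, task-hung-threshold?, task-timeout?, request-timeout?, acceptor-config?, proxy-config?, autostart?)> ...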
The following addition was made to the description of the address-provider subelement of well-known-addresses in the Developer's Guide for Oracle Coherence:

The calling component will attempt to obtain the full list upon node startup; the provider must return a terminating null address to indicate that all available addresses have been returned.
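To illustrate that contract, a minimal custom provider might look like the following sketch; the interface is com.tangosol.net.AddressProvider, and the two addresses are placeholder assumptions:

import com.tangosol.net.AddressProvider;
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;

public class ExampleAddressProvider implements AddressProvider {
    // Hypothetical fixed list of well-known addresses.
    private final Iterator<InetSocketAddress> iter = Arrays.asList(
            new InetSocketAddress("192.168.1.1", 8088),
            new InetSocketAddress("192.168.1.2", 8088)).iterator();

    // Return the next address, or the terminating null to indicate
    // that all available addresses have been returned.
    public InetSocketAddress getNextAddress() {
        return iter.hasNext() ? iter.next() : null;
    }

    public void accept() {
        // the last returned address was used successfully
    }

    public void reject(Throwable cause) {
        // the last returned address could not be used
    }
}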
The following addition was made to the description of the backing-map-scheme subelement of the distributed-scheme element in the Developer's Guide for Oracle Coherence. Added text is in italics.
Table 2-2 Changes Made to the backing-map-scheme Description

Old Text: Note that when using an overflow-based backing map it is important that the corresponding ...

New Text: When using an off-heap backing map it is important that the corresponding ...
This section discusses JVM sizing considerations for Coherence cluster JVMs. The primary issue to consider when sizing your JVMs is achieving a balance of available RAM versus garbage collection (GC) pause times.
GC Pauses
Lengthy GC pause times can negatively impact the Coherence cluster as they are, for the most part, indistinguishable from node death. For this reason, it is very important that cluster nodes are sized and/or tuned to ensure that their GC times remain minimal. As a good rule of thumb, a node should spend less than 10% of its time paused in GC, normal GC times should be under 100ms, and maximum GC times should be around 1 second.
You can monitor GC activity in several ways; some standard mechanisms include:
JVM switch -Xloggc: (similar to -verbose:gc but includes timestamps)
Over JMX using tools such as jConsole
If you are looking to just get things up and running with minimal effort, the following recommendations should suffice.
The standard safe recommendation for Coherence cache servers is to run a fixed-size heap of up to 1GB. Additionally, it is recommended to use an incremental garbage collector to minimize GC pause durations, and to run the JVM in "server" mode to encourage optimizations for long-running processes.
For example, the following command allows for good performance without the need for more elaborate JVM tuning (gc.log here is simply an example destination for the GC log):
java -server -Xms1g -Xmx1g -Xincgc -Xloggc:gc.log -cp coherence.jar com.tangosol.net.DefaultCacheServer
Coherence TCMP clients should be configured similarly to cache servers, as long GC pauses could cause them to be misidentified as dead.
Coherence Extend clients are not, technically speaking, cluster members, and as such the effect of long GCs is less detrimental. For Extend clients, it is recommended that you follow the existing guidelines set forth by the application in which you are embedding Coherence.
There is a related question of how much data you can store within a cache server of a given size. The basic recommendation is to use up to one-third of the heap for primary cache storage. This leaves another one-third for backup storage, and the final one-third for "scratch space". Scratch space is then used for things such as holding classes, temporary objects, network transfer buffers, and GC compaction. You may instruct Coherence to limit primary storage on a per-cache basis by using the <high-units> element, and specifying BINARY in the <unit-calculator> element. These settings are automatically applied to backup storage as well.
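For illustration, a scheme that caps primary storage at roughly one-third of a 1GB heap might be configured along the following lines; the scheme name, service name, and byte figure are assumptions for the sketch:

<distributed-scheme>
  <scheme-name>example-limited</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <!-- with the BINARY calculator, units are measured in bytes;
           cap primary storage at roughly 1/3 of a 1GB heap -->
      <high-units>350000000</high-units>
      <unit-calculator>BINARY</unit-calculator>
    </local-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>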
Ideally, both the primary and backup storage will also fit within the JVM's tenured space (for HotSpot-based JVMs). See HotSpot's Tuning Garbage Collection guide for details on sizing the collector's generations.
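On HotSpot, one way to size the tenured generation is the -XX:NewRatio flag; for example, -XX:NewRatio=3 devotes roughly three-quarters of the heap to the tenured generation, which comfortably covers the two-thirds used for primary and backup storage:

java -server -Xms1g -Xmx1g -XX:NewRatio=3 -cp coherence.jar com.tangosol.net.DefaultCacheServer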
See the Developer's Guide for Oracle Coherence for more information on the <high-units> and <unit-calculator> elements.
It is possible to run cache servers with larger heap sizes, although it becomes more important to monitor and tune the JVMs to minimize GC pauses. It may also be necessary to alter the storage ratios so that the amount of scratch space is increased to facilitate faster GC compactions. Additionally, it is recommended that you use an up-to-date JVM version, such as HotSpot 1.6, as it includes significant improvements for managing large heaps.
Running multiple identical cache server instances on a single machine enables you to use the available system memory. It is important not to overcommit the available resources: if you have a machine with 16GB of RAM, it is not reasonable to attempt to dedicate all 16GB of memory to your JVMs. Ultimately, when all the machine's processes are running, you want to be in a state where swap space is not actively being used.

In selecting the size and number of JVMs to run, it is important to realize that a JVM process will use more memory than is specified when configuring the heap size. The heap size settings specify the amount of heap that the JVM makes available to the application, but the JVM itself will also consume additional memory. The amount consumed differs depending on the OS and JVM settings; for instance, a HotSpot JVM running on Linux configured with a 1GB heap will consume roughly 1.2GB of RAM. It is important that you externally measure the JVMs' memory utilization to ensure that you do not overcommit your RAM. Tools such as top, vmstat, and Task Manager are useful in identifying how much RAM is actually being used.
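As a rough worked example based on the figures above: with 1GB heaps, each cache server consumes about 1.2GB of RAM, so a 16GB machine might comfortably host around ten such instances (roughly 12GB in total), leaving several gigabytes of headroom for the operating system and other processes.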
The following messages have been documented for the 3.5.1 release: the most frequently seen log messages, the partitioned cache service messages, and the TCMP log messages. The documented messages are available at:

https://metalink.oracle.com/CSP/main/article?cmd=show&type=NOT&id=845363.1