|Oracle® Coherence Developer's Guide
Part Number E22837-01
This chapter provides a quick overview of general Coherence concepts and features. It outlines product capabilities, usage possibilities, and provides a brief overview of how one would go about implementing particular features. The items discussed in this chapter are detailed throughout this guide.
The topics in this section describe fundamental concepts associated with Coherence and discuss several important features of using Coherence to cluster data.
At the core of Coherence is the concept of clustered data management. This implies the following goals:
A fully coherent, single system image (SSI)
Scalability for both read and write access
Fast, transparent failover and failback
Linear scalability for storage and processing
No Single-Points-of-Failure (SPOFs)
Cluster-wide locking and transactions
Built on top of this foundation are the various services that Coherence provides, including database caching, HTTP session management, grid agent invocation and distributed queries. Before going into detail about these features, some basic aspects of Coherence should be discussed.
Coherence supports many topologies for clustered data management. Each of these topologies has a trade-off in terms of performance and fault-tolerance. By using a single API, the choice of topology can be deferred until deployment if desired. This allows developers to work with a consistent logical view of Coherence, while providing flexibility during tuning or as application needs change.
Coherence provides several cache implementations:
Local Cache—Local on-heap caching for non-clustered caching.
Replicated Cache—Perfect for small, read-heavy caches.
Distributed Cache—True linear scalability for both read and write access. Data is automatically, dynamically and transparently partitioned across nodes. The distribution algorithm minimizes network traffic and avoids service pauses by incrementally shifting data.
Near Cache—Provides the performance of local caching with the scalability of distributed caching. Several different near-cache strategies are available and offer a trade-off between performance and synchronization guarantees.
In-process caching provides the highest level of raw performance, since objects are managed within the local JVM. This benefit is most directly realized by the Local, Replicated, Optimistic and Near Cache implementations.
Out-of-process (client/server) caching provides the option of using dedicated cache servers. This can be helpful when you want to partition workloads (to avoid stressing the application servers). This is accomplished by using the Partitioned cache implementation and simply disabling local storage on client nodes through a single command-line option or a one-line entry in the XML configuration.
Tiered caching (using the Near Cache functionality) enables you to couple local caches on the application server with larger, partitioned caches on the cache servers, combining the raw performance of local caching with the scalability of partitioned caching. This is useful for both dedicated cache servers and co-located caching (cache partitions stored within the application server JVMs).
See Part III, "Using Caches" for detailed information on configuring and using caches.
While most customers use on-heap storage combined with dedicated cache servers, Coherence has several options for data storage:
On-heap—The fastest option, though it can affect JVM garbage collection times.
NIO RAM—No impact on garbage collection, though it does require serialization/deserialization.
NIO Disk—Similar to NIO RAM, but using memory-mapped files.
File-based—Uses a special disk-optimized storage system to optimize speed and minimize I/O.
Coherence storage is transient: the disk-based storage options are for managing cached data only. For persistent storage, Coherence offers backing maps coupled with a CacheLoader/CacheStore.
See Chapter 13, "Implementing Storage and Backing Maps," for detailed information.
Because serialization is often the most expensive part of clustered data management, Coherence provides the following options for serializing/deserializing data:
com.tangosol.io.pof.PofSerializer – The Portable Object Format (also referred to as POF) is a language agnostic binary format. POF was designed to be incredibly efficient in both space and time and is the recommended serialization option in Coherence. See Chapter 19, "Using Portable Object Format."
java.io.Serializable – The simplest, but slowest option.
java.io.Externalizable – This requires developers to implement serialization manually, but can provide significant performance benefits. Compared to java.io.Serializable, this can cut serialized data size by a factor of two or more (especially helpful with Distributed caches, as they generally cache data in serialized form). Most importantly, CPU usage is dramatically reduced.
com.tangosol.io.ExternalizableLite – This is very similar to java.io.Externalizable, but offers better performance and less memory usage by using a more efficient IO stream implementation.
com.tangosol.run.xml.XmlBean – A default implementation of ExternalizableLite.
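To illustrate the ExternalizableLite option, the following is a minimal sketch; the Trade class and its fields are invented for the example, and the ExternalizableHelper utility is used for safe string handling:

```java
import com.tangosol.io.ExternalizableLite;
import com.tangosol.util.ExternalizableHelper;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical value object; the class and fields are examples only.
public class Trade implements ExternalizableLite {
    private String m_sSymbol;
    private int    m_cShares;

    public Trade() {}  // a public no-arg constructor is required for deserialization

    public Trade(String sSymbol, int cShares) {
        m_sSymbol = sSymbol;
        m_cShares = cShares;
    }

    // Read each field in the same order it was written
    public void readExternal(DataInput in) throws IOException {
        m_sSymbol = ExternalizableHelper.readSafeUTF(in);
        m_cShares = in.readInt();
    }

    // Write each field explicitly; no reflection is involved
    public void writeExternal(DataOutput out) throws IOException {
        ExternalizableHelper.writeSafeUTF(out, m_sSymbol);
        out.writeInt(m_cShares);
    }
}
```

Because each field is written explicitly, the serialized form carries no class descriptor overhead, which is where the size and CPU savings come from.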
Coherence's API provides access to all Coherence functionality. The most commonly used subset of this API is exposed through simple XML options to minimize effort for typical use cases. There is no penalty for mixing direct configuration through the API with the easier XML configuration.
Coherence is designed to allow the replacement of its modules as needed. For example, the local "backing maps" (which provide the actual physical data storage on each node) can be easily replaced as needed. The vast majority of the time, this is not required, but it is there for the situations that require it. The general guideline is that 80% of tasks are easy, and the remaining 20% of tasks (the special cases) require a little more effort, but certainly can be done without significant hardship.
Coherence is organized as a set of services. At the root is the Cluster service. A cluster is defined as a set of Coherence instances (one instance per JVM, with one or more JVMs on each computer). A cluster is defined by the combination of multicast address and port. A TTL (network packet time-to-live; that is, the number of network hops) setting can restrict the cluster to a single computer, or to the computers attached to a single switch.
Under the cluster service are the various services that comprise the Coherence API. These include the various caching services (Replicated, Distributed, and so on) and the Invocation Service (for deploying agents to various nodes of the cluster). Each instance of a service is named, and there is typically a default service instance for each type.
The cache services contain named caches (com.tangosol.net.NamedCache), which are analogous to database tables—that is, they typically contain a set of related objects.
See Chapter 6, "Introduction to Coherence Clusters," for more information on the cluster service as well as the other cluster-based services provided by Coherence.
This section provides an overview of the NamedCache API, which is the primary interface used by applications to get and interact with cache instances. This section also includes some insight into the use of the NamedCache API.
The following source code returns a reference to a NamedCache instance. The underlying cache service is started if necessary. See the Oracle Coherence Java API Reference for details on the CacheFactory class.

import com.tangosol.net.*;
...
NamedCache cache = CacheFactory.getCache("MyCache");
Coherence scans the cache configuration XML file for a name mapping for MyCache. This is similar to Servlet name mapping in a web container's web.xml file. Coherence's cache configuration file contains (in the simplest case) a set of mappings (from cache name to cache scheme) and a set of cache schemes.
By default, Coherence uses the coherence-cache-config.xml file found at the root of coherence.jar. This can be overridden on the JVM command line with -Dtangosol.coherence.cacheconfig=file.xml. This argument can reference either a file system path or a Java resource path.
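A minimal cache configuration file might look like the following sketch; the cache name and scheme name are invented for the example:

```xml
<cache-config>
  <caching-scheme-mapping>
    <!-- Map the cache name "MyCache" to a scheme defined below -->
    <cache-mapping>
      <cache-name>MyCache</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- A partitioned (distributed) cache with on-heap backing storage -->
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

Wildcard cache names (for example, a cache-name of *) can map many caches to a single scheme, which keeps the configuration small as the number of named caches grows.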
The com.tangosol.net.NamedCache interface extends several other interfaces:
java.util.Map—basic Map methods such as get(), put(), and remove().
com.tangosol.util.ObservableMap—methods for listening to cache events. (See Chapter 21, "Using Cache Events".)
com.tangosol.net.cache.CacheMap—methods for getting a collection of keys (as a Map) that are in the cache and for putting objects in the cache. Also supports adding an expiry value when putting an entry in a cache.
com.tangosol.util.ConcurrentMap—methods for concurrent access such as lock() and unlock().
com.tangosol.util.InvocableMap—methods for server-side processing of cache data.
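The following sketch shows how several of these interface facets combine in practice; the cache name, keys, and values are arbitrary:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NamedCacheExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("MyCache");

        // Map facet: basic get and put
        cache.put("key", "value");
        Object oValue = cache.get("key");

        // CacheMap facet: put with a 10-second expiry (in milliseconds)
        cache.put("temp", "value", 10000L);

        // ConcurrentMap facet: explicit cluster-wide locking
        if (cache.lock("key")) {
            try {
                cache.put("key", "new value");
            } finally {
                cache.unlock("key");
            }
        }
    }
}
```

Because every topology implements the same interface, this code is unchanged whether "MyCache" is configured as a local, replicated, distributed, or near cache.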
Cache keys and values must be serializable (for example, java.io.Serializable). Furthermore, cache keys must provide an implementation of the hashCode() and equals() methods, and those methods must return consistent results across cluster nodes. This implies that the implementation of hashCode() and equals() must be based solely on the object's serializable state (that is, the object's non-transient fields); most built-in Java types, such as Date, meet this requirement. Some cache implementations (specifically the partitioned cache) use the serialized form of the key objects for equality testing, which means that keys for which equals() returns true must serialize identically; most built-in Java types meet this requirement as well.
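As an illustration, a cache key class whose equals() and hashCode() are based solely on its serializable state might be sketched as follows; the class and its fields are invented for the example:

```java
import java.io.Serializable;

// Hypothetical composite cache key; both fields are non-transient,
// so equality and hashing are based purely on serializable state.
public class OrderKey implements Serializable {
    private final String customerId;
    private final long   orderId;

    public OrderKey(String customerId, long orderId) {
        this.customerId = customerId;
        this.orderId    = orderId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OrderKey)) return false;
        OrderKey that = (OrderKey) o;
        return orderId == that.orderId
            && customerId.equals(that.customerId);
    }

    @Override
    public int hashCode() {
        return customerId.hashCode() * 31 + Long.hashCode(orderId);
    }
}
```

Avoid transient or environment-dependent fields (such as a cached timestamp or an identity hash) in key classes, since two logically equal keys must also serialize to identical bytes for the partitioned cache.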
There are two general approaches to using a NamedCache:
As a clustered implementation of java.util.Map with several added features (queries, concurrency), but with no persistent backing (a "side" cache).
As a means of decoupling access to external data sources (an "inline" cache). In this case, the application uses the NamedCache interface, and the NamedCache takes care of managing the underlying database (or other resource).
Typically, an inline cache is used to cache data from:
a database—The most intuitive use of a cache—simply caching database tables (in the form of Java objects).
a service—Mainframe, web service, service bureau—any service that represents an expensive resource to access (either due to computational cost or actual access fees).
calculations—Financial calculations, aggregations, data transformations. Using an inline cache makes it very easy to avoid duplicating calculations. If the calculation is complete, the result is simply pulled from the cache. Since any serializable object can be used as a cache key, it is a simple matter to use an object containing calculation parameters as the cache key.
See Chapter 14, "Caching Data Sources" for more information on inline caching.
When writing changes back to the data source, two options are available:
write-through—Ensures that the external data source always contains up-to-date information. Used when data must be persisted immediately, or when sharing a data source with other applications.
write-behind—Provides better performance by caching writes to the external data source. Not only can writes be buffered to even out the load on the data source, but multiple writes can be combined, further reducing I/O. The trade-off is that data is not immediately persisted to disk; however, it is immediately distributed across the cluster, so the data survives the loss of a server. Furthermore, if the entire data set is cached, this option means that the application can temporarily survive a complete failure of the data source, as neither cache reads nor writes require synchronous access to the data source.
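In configuration terms, write-behind is typically enabled by specifying a nonzero write delay on a read/write backing map. The following is a sketch; the scheme names and CacheStore class are invented:

```xml
<distributed-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <!-- In-memory cache that fronts the data source -->
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <!-- The application's CacheStore implementation -->
      <cachestore-scheme>
        <class-scheme>
          <class-name>com.example.MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <!-- A nonzero delay enables write-behind;
           omitting it (or zero) gives write-through -->
      <write-delay>10s</write-delay>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```

With a 10-second delay, repeated updates to the same entry within that window collapse into a single store() call against the data source.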
To implement a read-only inline cache, you simply implement two methods on the com.tangosol.net.cache.CacheLoader interface, one for singleton reads, the other for bulk reads. Coherence provides an abstract class, com.tangosol.net.cache.AbstractCacheLoader, that supplies a default implementation of the bulk method, so you need only implement a single method: public Object load(Object oKey). This method accepts an arbitrary cache key and returns the appropriate value object.
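A minimal read-only loader might be sketched like this; the EmployeeDao helper is hypothetical and stands in for the application's real data-access code:

```java
import com.tangosol.net.cache.AbstractCacheLoader;

public class EmployeeCacheLoader extends AbstractCacheLoader {
    // load() is the only method that must be implemented;
    // the bulk loadAll() is inherited from AbstractCacheLoader
    public Object load(Object oKey) {
        // EmployeeDao is a hypothetical data-access helper;
        // any database or service call could go here
        return EmployeeDao.findById((String) oKey);
    }
}
```

On a cache miss, Coherence calls load() on the node that owns the key and places the returned object into the cache before handing it back to the caller.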
If you want to implement read/write caching, you must extend com.tangosol.net.cache.AbstractCacheStore (or implement the interface com.tangosol.net.cache.CacheStore), which adds the following methods:

public void erase(Object oKey);
public void eraseAll(Collection colKeys);
public void store(Object oKey, Object oValue);
public void storeAll(Map mapEntries);
The method erase() should remove the specified key from the external data source. The method store() should update the specified item in the data source if it exists, or insert it if it does not presently exist.
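A read/write store might be sketched as follows; the EmployeeDao helper and Employee class are hypothetical stand-ins for the application's real persistence code:

```java
import com.tangosol.net.cache.AbstractCacheStore;

public class EmployeeCacheStore extends AbstractCacheStore {
    public Object load(Object oKey) {
        return EmployeeDao.findById((String) oKey);
    }

    public void store(Object oKey, Object oValue) {
        // Insert or update, depending on whether the row already exists
        EmployeeDao.save((String) oKey, (Employee) oValue);
    }

    public void erase(Object oKey) {
        EmployeeDao.delete((String) oKey);
    }
}
```

The bulk storeAll() and eraseAll() defaults inherited from AbstractCacheStore simply iterate these methods; overriding them with batch SQL is a common optimization when write-behind coalesces many changes.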
CacheStore is implemented, it can be connected through the
Coherence provides the ability to query cached data. With partitioned caches, the queries are indexed and parallel, which means that adding servers to a partitioned cache not only increases throughput (total queries per second) but also reduces latency (the time each individual query takes). To query against a NamedCache, all objects should implement a common interface (or base class). Any field of an object can be queried; indexes are optional, and used to increase performance. With a replicated cache, queries are performed locally, and do not use indexes. See Chapter 22, "Querying Data In a Cache," for detailed information.
To add an index to a NamedCache, you first need a value extractor (which accepts as input a value object and returns an attribute of that object). Indexes can be added blindly (duplicate indexes are ignored), and they can be added at any time, before or after inserting data into the cache.
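For instance, adding an index on a getSymbol property and then querying it might look like the following sketch; the cache name and attribute are invented for the example:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;

import java.util.Set;

public class QueryExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("trades");

        // Index the getSymbol() attribute; the second argument
        // requests a sorted index, the third is an optional Comparator
        cache.addIndex(new ReflectionExtractor("getSymbol"), true, null);

        // The filter is evaluated in parallel across the partitions,
        // using the index rather than deserializing every value
        Set setEntries = cache.entrySet(new EqualsFilter("getSymbol", "ORCL"));
    }
}
```

Without the index the query still works; the index only changes how quickly the filter is evaluated.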
It should be noted that queries apply only to cached data. For this reason, queries should not be used unless the entire data set has been loaded into the cache, or unless additional support is added to manage partially loaded sets.
Developers have the option of implementing additional custom filters for queries, thus taking advantage of query parallel behavior. For particularly performance-sensitive queries, developers may implement index-aware filters, which can access Coherence's internal indexing structures.
Coherence includes a built-in optimizer, and applies indexes in the optimal order. Because of the focused nature of the queries, the optimizer is both effective and efficient. No maintenance is required.
Coherence provides various transaction options. The options include: basic data concurrency using the ConcurrentMap interface and EntryProcessor API, atomic transactions using the Transaction Framework API, and atomic transactions with full XA support using the Coherence resource adapter. See Chapter 27, "Performing Transactions" for detailed instructions.
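As a sketch of the EntryProcessor approach, the following invented processor atomically increments a cached counter; because process() runs on the node that owns the entry, the read-modify-write needs no explicit locking:

```java
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

public class IncrementProcessor extends AbstractProcessor {
    public Object process(InvocableMap.Entry entry) {
        Integer count = (Integer) entry.getValue();
        int newCount = (count == null) ? 1 : count + 1;
        entry.setValue(newCount);
        return newCount;
    }
}

// Usage (the cache name and key are arbitrary):
//   Object result = cache.invoke("counter", new IncrementProcessor());
```

Contrast this with a lock()/get()/put()/unlock() sequence, which requires three network round trips instead of one.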
Coherence*Web is an HTTP session-management module with support for a wide range of application servers. See Oracle Coherence User's Guide for Oracle Coherence*Web for more information on Coherence*Web.
Using Coherence session management does not require any changes to the application. Coherence*Web uses the NearCache technology to provide fully fault-tolerant caching, with almost unlimited scalability (to several hundred cluster nodes without issue).
The Coherence invocation service can deploy computational agents to various nodes within the cluster. These agents can be either execute-style (deploy and asynchronously listen) or query-style (deploy and synchronously listen). See Chapter 24, "Processing Data In a Cache," for more information on using the invocation service.
The invocation service is accessed through the com.tangosol.net.InvocationService interface and includes the following two methods:

public void execute(Invocable task, Set setMembers, InvocationObserver observer);
public Map query(Invocable task, Set setMembers);
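A sketch of a query-style invocation follows; the task is invented, and the service name assumes an invocation scheme named "InvocationService" exists in the cache configuration:

```java
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;

import java.util.Map;

public class InvocationExample {
    public static void main(String[] args) {
        // Assumes a configured invocation scheme with this service name
        InvocationService service =
            (InvocationService) CacheFactory.getService("InvocationService");

        // Run synchronously on all members (a null member set means "all");
        // the result is a Map of member to returned value
        Map mapResults = service.query(new FreeMemoryTask(), null);
    }

    // Invented task that reports each member's free heap
    public static class FreeMemoryTask extends AbstractInvocable {
        public void run() {
            setResult(Runtime.getRuntime().freeMemory());
        }
    }
}
```

The execute() form runs the same task asynchronously, reporting completion through the supplied InvocationObserver.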
An instance of the service can be retrieved from the com.tangosol.net.CacheFactory class.
Coherence implements the WorkManager API for task-centric processing.
NamedCache instances in Coherence implement the com.tangosol.util.ObservableMap interface, which allows the option of attaching a cache listener implementation (of com.tangosol.util.MapListener). It should be noted that applications can observe events as logical concepts regardless of which computer caused the event. Customizable server-based filters and lightweight events can minimize network traffic and processing. Cache listeners follow the JavaBean paradigm, and can distinguish between system cache events (for example, eviction) and application cache events (for example, get/put operations).
Continuous Query functionality provides the ability to maintain a client-side "materialized view". Similarly, any service can be watched for members joining and leaving, including the cluster service and the cache and invocation services.
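A minimal listener sketch follows; the cache name is invented, and MultiplexingMapListener is used to funnel insert, update, and delete events into a single callback:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MultiplexingMapListener;

public class ListenerExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("trades");

        // The callback fires regardless of which cluster member
        // caused the change
        cache.addMapListener(new MultiplexingMapListener() {
            protected void onMapEvent(MapEvent evt) {
                System.out.println("event: " + evt);
            }
        });
    }
}
```

Implementing com.tangosol.util.MapListener directly instead provides separate entryInserted(), entryUpdated(), and entryDeleted() callbacks when the event types need distinct handling.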
See Chapter 21, "Using Cache Events," for more detailed information on using events.
Most ORM products support Coherence as an "L2" caching plug-in. These solutions cache entity data inside Coherence, allowing applications on multiple servers to share cached data. See Oracle Coherence Integration Guide for Oracle Coherence for more information.
Coherence provides support for cross-platform clients (over TCP/IP). All clients use the same wire protocol (the servers do not differentiate between client platforms). Also, note that there are no third-party components in any of these clients (such as embedded JVMs or language bridges). The wire protocol supports event feeds and coherent in-process caching for all client platforms. See Oracle Coherence Client Guide for complete instructions on using Coherence*Extend to support remote C++ and .NET clients.
Coherence offers management and monitoring facilities by using Java Management Extensions (JMX). See Oracle Coherence Management Guide for detailed information on using JMX with Coherence.