This chapter provides instructions for using Coherence's transaction and data concurrency features. Users should be familiar with transaction principles before reading this chapter. In addition, the Coherence Resource Adapter requires knowledge of J2EE Connector Architecture (J2CA), Java Transaction API (JTA) and Java EE deployment.
Transactions ensure correct outcomes in systems that undergo state changes by allowing a programmer to scope multiple state changes into a unit of work. The state changes are committed only if each change can complete without failure; otherwise, all changes must be rolled back to their previous state.
Transactions attempt to maintain a set of criteria that are commonly referred to as ACID properties (Atomicity, Consistency, Isolation, Durability):
Atomic - The changes that are performed within the transaction are either all committed or all rolled back to their previous state.
Consistent - The results of a transaction must leave any shared resources in a valid state.
Isolated - The results of a transaction are not visible outside of the transaction until the transaction has been committed.
Durable - The changes that are performed within the transaction are made permanent.
Sometimes ACID properties cannot be maintained solely by the transaction infrastructure and may require customized business logic. For instance, the consistency property requires program logic to check whether changes to a system are valid. In addition, strict adherence to the ACID properties can directly affect infrastructure and application performance and must be carefully considered.
Coherence offers various transaction options that provide different transaction guarantees. The options should be selected based on an application's or solution's transaction requirements.
Table 27-1 summarizes the transaction options that Coherence offers.
Table 27-1 Coherence Transaction Options
Option Name | Description
---|---
Explicit locking | The ConcurrentMap interface (which is extended by the NamedCache interface) includes explicit lock and unlock operations that are applied at the entry level. Explicit locking provides concurrency control for individual entries but does not provide atomic guarantees across multiple operations.
Entry Processors | Coherence also supports a lock-free programming model through the EntryProcessor API. Entry processors perform an implicit low-level lock on the entries they process and execute where the data is stored, which minimizes contention and network overhead. Individual invocations are atomic, but multiple invocations do not execute as a single atomic unit.
Transaction Framework API | The Coherence Transaction Framework API is a connection-based API that provides atomic transaction guarantees across partitions and caches even with a client failure. The framework supports the use of NamedCache operations, queries, aggregators, and entry processors within the context of a transaction.
Coherence Resource Adapter | The Coherence resource adapter leverages the Coherence Transaction Framework API and allows Coherence to participate as a resource in XA transactions that are managed by a Java EE container's transaction manager. This transaction option offers atomic guarantees. For detailed information on this option, see "Using the Coherence Resource Adapter".
The standard NamedCache interface extends the ConcurrentMap interface, which includes basic locking methods. Locking operations are applied at the entry level by requesting a lock against a specific key in a NamedCache:
Example 27-1 Applying Locking Operations on a Cache
...
NamedCache cache = CacheFactory.getCache("dist-cache");
Object key = "example_key";

cache.lock(key, -1);
try {
    Object value = cache.get(key);
    // application logic
    cache.put(key, value);
} finally {
    // Always unlock in a "finally" block
    // to ensure that uncaught exceptions
    // do not leave data locked
    cache.unlock(key);
}
...
Coherence lock functionality is similar to the Java synchronized
keyword and the C# lock
keyword: locks only block locks. Threads must cooperatively coordinate access to data through appropriate use of locking. If a thread has locked the key to an item, another thread can read the item without locking.
Locks are unaffected by server failure; they fail over transparently to the backup server. Locks are immediately released when the lock owner (client) fails.
Locking behavior varies depending on the timeout requested and the type of cache. A timeout of -1 blocks indefinitely until a lock can be obtained, 0 returns immediately, and a value greater than 0 waits the specified number of milliseconds before timing out. The boolean return value should be examined to ensure the caller has actually obtained the lock. See ConcurrentMap.lock() for more details. Note that if a timeout value is not passed to lock()
the default is 0. With replicated caches, the entire cache can be locked by using ConcurrentMap.LOCK_ALL
as the key, although this is usually not recommended. This operation is not supported with partitioned caches.
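The following sketch, assuming the same dist-cache used in Example 27-1, shows one way to request a lock with a finite timeout and check the boolean result before proceeding; the two-second value is illustrative only:

NamedCache cache = CacheFactory.getCache("dist-cache");
Object key = "example_key";

// wait up to two seconds for the lock instead of blocking indefinitely
if (cache.lock(key, 2000)) {
    try {
        Object value = cache.get(key);
        // application logic
        cache.put(key, value);
    } finally {
        // always release the lock
        cache.unlock(key);
    }
} else {
    // the lock was not obtained within the timeout; retry or report an error
}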
In both replicated and partitioned caches, gets are permitted on keys that are locked. In a replicated cache, puts are blocked, but they are not blocked in a partitioned cache. When a lock is in place, it is the responsibility of the caller (either in the same thread or the same cluster node, depending on the lease-granularity
configuration) to release the lock. This is why locks should always be released with a finally clause (or equivalent). If this is not done, unhandled exceptions may leave locks in place indefinitely. For more information on lease-granularity
configuration, see "DistributedCache Service Parameters".
The InvocableMap
superinterface of NamedCache
allows for concurrent lock-free execution of processing code within a cache. This processing is performed by an EntryProcessor
. In exchange for reduced flexibility compared to the more general ConcurrentMap
explicit locking API, EntryProcessors
provide the highest levels of efficiency without compromising data reliability.
Since EntryProcessors
perform an implicit low-level lock on the entries they are processing, the end user can place processing code in an EntryProcessor
without having to worry about concurrency control. Note that this is different than the explicit lock(key)
functionality provided by ConcurrentMap
API.
In a replicated cache or a partitioned cache running under Caching Edition, execution happens locally on the initiating client. In partitioned caches running under Enterprise Edition or greater, the execution occurs on the node that is responsible for primary storage of the data.
InvocableMap
provides three methods of starting EntryProcessors:
Invoke an EntryProcessor
on a specific key. Note that the key need not exist in the cache to invoke an EntryProcessor
on it.
Invoke an EntryProcessor
on a collection of keys.
Invoke an EntryProcessor
on a Filter
. In this case, the Filter
is executed against the cache entries. Each entry that matches the Filter
criteria has the EntryProcessor
executed against it. For more information on Filters, see Chapter 22, "Querying Data In a Cache".
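As an illustration, the following sketch exercises all three invocation styles against the dist-test cache, using the MyCounterProcessor defined in Example 27-3 below; the keys key1 through key3 are hypothetical:

import java.util.Arrays;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.AlwaysFilter;

NamedCache cache = CacheFactory.getCache("dist-test");

// 1. invoke against a single key (the key need not exist yet)
cache.invoke("key1", new MyCounterProcessor());

// 2. invoke against an explicit collection of keys
cache.invokeAll(Arrays.asList("key1", "key2", "key3"), new MyCounterProcessor());

// 3. invoke against every entry that matches a filter
cache.invokeAll(new AlwaysFilter(), new MyCounterProcessor());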
In partitioned caches running under Enterprise Edition or greater, entry processors are executed in parallel across the cluster (on the nodes that own the individual entries). This provides a significant advantage over having a client lock all affected keys, pull all required data from the cache, process the data, place the data back in the cache, and unlock the keys. The processing occurs in parallel across multiple computers (as opposed to serially on one computer) and the network overhead of obtaining and releasing locks is eliminated.
Note:
EntryProcessor classes must be available in the classpath for each cluster node.

Here is a sample of concurrency control implemented with explicit locking rather than an entry processor. Code that requires network access is commented:
Example 27-2 Concurrency Control without Using EntryProcessors
final NamedCache cache = CacheFactory.getCache("dist-test");
final String key = "key";
cache.put(key, new Integer(1));

// begin processing

// *requires network access*
if (cache.lock(key, 0)) {
    try {
        // *requires network access*
        Integer i = (Integer) cache.get(key);

        // *requires network access*
        cache.put(key, new Integer(i.intValue() + 1));
    } finally {
        // *requires network access*
        cache.unlock(key);
    }
}

// end processing
The following is an equivalent technique using an Entry Processor. Again, network access is commented:
Example 27-3 Concurrency Control Using EntryProcessors
final NamedCache cache = CacheFactory.getCache("dist-test");
final String key = "key";
cache.put(key, new Integer(1));

// begin processing

// *requires network access*
cache.invoke(key, new MyCounterProcessor());

// end processing

...

public static class MyCounterProcessor extends AbstractProcessor {
    // this is executed on the node that owns the data,
    // no network access required
    public Object process(InvocableMap.Entry entry) {
        Integer i = (Integer) entry.getValue();
        entry.setValue(new Integer(i.intValue() + 1));
        return null;
    }
}
EntryProcessors
are individually executed atomically; however, multiple EntryProcessor
invocations by using InvocableMap.invokeAll()
do not execute as one atomic unit. As soon as an individual EntryProcessor
has completed, any updates made to the cache are immediately visible while the other EntryProcessors
are executing. Furthermore, an uncaught exception in an EntryProcessor
does not prevent the others from executing. Should the primary node for an entry fail while executing an EntryProcessor
, the backup node performs the execution instead. However if the node fails after the completion of an EntryProcessor
, the EntryProcessor
is not invoked on the backup.
Note that in general, EntryProcessors
should be short lived. Applications with longer running EntryProcessors
should increase the size of the distributed service thread pool so that other operations performed by the distributed service are not blocked by the long running EntryProcessor
. For more information on the distributed service thread pool, see "DistributedCache Service Parameters".
Coherence includes several EntryProcessor
implementations for common use cases. Further details on these EntryProcessors
, along with additional information on parallel data processing, can be found in "Provide a Data Grid".
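For example, one of the bundled processors, ConditionalPut, can be combined with a filter to implement a put-if-absent that is evaluated atomically on the storage node. A minimal sketch, assuming the same dist-test cache used above:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.NotFilter;
import com.tangosol.util.filter.PresentFilter;
import com.tangosol.util.processor.ConditionalPut;

NamedCache cache = CacheFactory.getCache("dist-test");

// insert the value only if the key is not already present;
// the check and the put are evaluated atomically where the entry lives
cache.invoke("key1", new ConditionalPut(new NotFilter(new PresentFilter()), "value1"));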
The Transaction Framework API allows TCMP clients to perform operations and use queries, aggregators, and entry processors within the context of a transaction. The transactions provide read consistency and atomic guarantees across partitions and caches even with client failure. The framework uses its own concurrency strategy and storage implementation and its own recovery manager for failed transactions.
Note:
The TransactionMap API has been deprecated and is superseded by the Transaction Framework API. The two APIs are mutually exclusive.

The Transaction Framework API has the following limitations:
Database Integration – For existing Coherence users, the most noticeable limitation is the lack of support for database integration as compared to the existing Partitioned NamedCache
implementation.
Server-Side Functionality – Transactional caches do not support eviction or expiry, though they support garbage collection of older object versions. Backing map listeners, triggers, and CacheStore
modules are not supported.
Explicit Locking and Pessimistic Transactions – Pessimistic/explicit locking (ConcurrentMap
interface) are not supported.
Filters – Filters, such as PartitionedFilter
, LimitFilter
and KeyAssociationFilter
, are not supported.
Synchronous Listener – The SynchronousListener
interface is not supported.
Near Cache – Wrapping a near cache around a transactional cache is not supported.
Key Partitioning Strategy – You cannot specify a custom KeyPartitioningStrategy for a transactional cache; however, key association (the KeyAssociation interface) or a custom KeyAssociator works, as sketched below.
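For instance, a hypothetical composite key can opt into key association so that related entries are assigned to the same partition. The class below is an illustrative sketch only; the key and field names are assumptions:

import java.io.Serializable;
import com.tangosol.net.cache.KeyAssociation;

// Hypothetical key for an order line that co-locates with its parent order
public class OrderLineKey implements KeyAssociation, Serializable {
    private final String orderId;
    private final int lineNumber;

    public OrderLineKey(String orderId, int lineNumber) {
        this.orderId = orderId;
        this.lineNumber = lineNumber;
    }

    // all order lines for the same order id land in the same partition
    public Object getAssociatedKey() {
        return orderId;
    }

    public boolean equals(Object o) {
        if (!(o instanceof OrderLineKey)) {
            return false;
        }
        OrderLineKey that = (OrderLineKey) o;
        return orderId.equals(that.orderId) && lineNumber == that.lineNumber;
    }

    public int hashCode() {
        return orderId.hashCode() * 31 + lineNumber;
    }
}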
The Transaction Framework API is also the underlying transaction framework for the Coherence JCA resource adapter. For details on using the resource adapter, see "Using the Coherence Resource Adapter".
Transactional caches are specialized distributed caches that provide transactional guarantees. Transactional caches are required whenever performing a transaction using the Transaction Framework API. Transactional caches are not interoperable with non-transactional caches.
At run-time, transactional caches are automatically used with a set of internal transactional caches that provide transactional storage and recovery. Transactional caches also allow default transaction behavior (including the default behavior of the internal transactional caches) to be overridden at run-time.
Transactional caches are defined within a cache configuration file using a <transactional-scheme>
element. A transaction scheme includes many of the same elements and attributes that are available to a distributed cache scheme. For detailed information about the <transactional-scheme>
element and all its subelements, see "transactional-scheme".
Note:
The use of transaction schemes within near cache schemes is currently not supported.

The following example demonstrates defining a transactional cache scheme in a cache configuration file. The cache is named MyTxCache
and maps to a <transactional-scheme>
that is named example-transactional
. The cache name can also use the tx-*
convention which allows multiple cache instances to use a single mapping to a transactional cache scheme.
Note:
The <service-name>
element, as shown in the example below, is optional. If no <service-name>
element is included in the transactional cache scheme, TransactionalCache
is used as the default service name. In this case, applications must connect to a transactional service using the default service name. See "Creating Transactional Connections".
Example 27-4 Example Transactional Cache Definition
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>MyTxCache</cache-name>
      <scheme-name>example-transactional</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <!-- Transactional caching scheme. -->
   <transactional-scheme>
      <scheme-name>example-transactional</scheme-name>
      <service-name>TransactionalCache</service-name>
      <thread-count>10</thread-count>
      <request-timeout>30000</request-timeout>
      <autostart>true</autostart>
   </transactional-scheme>
</caching-schemes>
The <transactional-scheme>
element also supports the use of scheme references. In the below example, a <transactional-scheme>
with the name example-transactional
references a <transactional-scheme>
with the name base-transactional
:
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>tx-*</cache-name>
      <scheme-name>example-transactional</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <transactional-scheme>
      <scheme-name>example-transactional</scheme-name>
      <scheme-ref>base-transactional</scheme-ref>
      <thread-count>10</thread-count>
   </transactional-scheme>

   <transactional-scheme>
      <scheme-name>base-transactional</scheme-name>
      <service-name>TransactionalCache</service-name>
      <request-timeout>30000</request-timeout>
      <autostart>true</autostart>
   </transactional-scheme>
</caching-schemes>
Applications perform cache operations within a transaction in one of three ways:
Using the NamedCache API – Applications use the NamedCache
API to implicitly perform cache operations within a transaction.
Using the Connection API – Applications use the Connection
API to explicitly perform cache operations within a transaction.
Using the Coherence Resource Adapter – Java EE applications use the Coherence Resource Adapter to connect to a Coherence data cluster and perform cache operations as part of a distributed (global) transaction.
The NamedCache
API can perform cache operations implicitly within the context of a transaction. However, this approach does not allow an application to change default transaction behavior. For example, transactions are in auto-commit mode when using the NamedCache
API approach. Each operation is immediately committed when it successfully completes; multiple operations cannot be scoped into a single transaction. Applications that require more control over transactional behavior must use the Connection
API. See "Using Transactional Connections" for a detailed description of a transaction's default behaviors.
The NamedCache
API approach is ideally suited for ensuring atomicity guarantees when performing single operations such as putAll
. The following example demonstrates a simple client that creates a NamedCache
instance and uses the CacheFactory.getCache()
method to get a transactional cache. The example uses the transactional cache that was defined in Example 27-4. The client performs a putAll
operation that is only committed if all the put
operations succeed. The transaction is automatically rolled back if any put
operation fails.
...
String key = "k";
String key2 = "k2";
String key3 = "k3";
String key4 = "k4";

CacheFactory.ensureCluster();
NamedCache cache = CacheFactory.getCache("MyTxCache");

Map map = new HashMap();
map.put(key, "value");
map.put(key2, "value2");
map.put(key3, "value3");
map.put(key4, "value4");

// operations performed on the cache are atomic
cache.putAll(map);

CacheFactory.shutdown();
...
The Connection
API is used to perform cache operations within a transaction and provides the ability to explicitly control transaction behavior. For example, applications can enable or disable auto-commit mode or change transaction isolation levels.
The examples in this section demonstrate how to use the Connection
interface, DefaultConnectionFactory
class, and the OptimisticNamedCache
interface which are located in the com.tangosol.coherence.transaction
package. The examples use the transactional cache that was defined in Example 27-4. The Connection
API is discussed in detail following the examples.
Example 27-5 demonstrates an auto-commit transaction, where two insert
operations are each executed as separate transactions.
Example 27-5 Performing an Auto-Commit Transaction
...
Connection con = new DefaultConnectionFactory().
    createConnection("TransactionalCache");
OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

cache.insert(key, value);
cache.insert(key2, value2);

con.close();
...
Example 27-6 demonstrates a non-auto-commit transaction, where two insert operations are performed within a single transaction. Applications that use non-auto-commit transactions must manually demarcate transaction boundaries.
Example 27-6 Performing a Non Auto-Commit Transaction
...
Connection con = new DefaultConnectionFactory().
    createConnection("TransactionalCache");
con.setAutoCommit(false);

try {
    OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

    cache.insert(key, value);
    cache.insert(key2, value2);
    con.commit();
} catch (Exception e) {
    con.rollback();
    throw e;
} finally {
    con.close();
}
...
Example 27-7 demonstrates performing a transaction that spans multiple caches. Each transactional cache must be defined in a cache configuration file.
Example 27-7 Transaction Across Multiple Caches
...
Connection con = new DefaultConnectionFactory().
    createConnection("TransactionalCache");
con.setAutoCommit(false);

OptimisticNamedCache cache = con.getNamedCache("MyTxCache");
OptimisticNamedCache cache1 = con.getNamedCache("MyTxCache1");

cache.insert(key, value);
cache1.insert(key2, value2);

con.commit();
con.close();
...
Note:
Transactions can span multiple partitions and caches within the same service but cannot span multiple services.

The com.tangosol.coherence.transaction.DefaultConnectionFactory
class is used to create com.tangosol.coherence.transaction.Connection
instances. The following code from Example 27-5 demonstrates creating a Connection
instance using the factory's no argument constructor:
Connection con = new DefaultConnectionFactory(). createConnection("TransactionalCache");
In this example, the first cache configuration file found on the classpath (or specified using the -Dtangosol.coherence.cacheconfig
system property) is used by this Connection
instance. Optionally, a URI can be passed as an argument to the factory class that specifies the location and name of a cache configuration file. For example, the following code demonstrates constructing a connection factory that uses a cache configuration file named cache-config.xml
that is located in a config
directory found on the classpath.
Connection con = new DefaultConnectionFactory("config/cache-config.xml"). createConnection("TransactionalCache");
The DefaultConnectionFactory
class provides methods for creating connections:
createConnection()
– The no-argument method creates a connection that is a member of the default transactional service, which is named TransactionalCache
. Use the no-argument method when the <transactional-scheme>
element being used does not include a specific <service-name>
element. For details on defining transactional cache schemes and specifying the service name, see "Defining Transactional Caches".
createConnection(ServiceName)
– This method creates a connection that is a member of a transactional service. The service name is a String
that indicates the transactional service to which this connection belongs. The ServiceName
maps to a <service-name>
element that is defined within a <transactional-scheme>
element in the cache configuration file. If no service name is used, the default name (TransactionalCache
) is used as the service name. For details on defining transactional cache schemes and specifying the service name, see "Defining Transactional Caches".
createConnection(ServiceName, loader)
– This method also creates a connection that is a member of a transactional service and specifies the class loader to use. In the above example, the connection is created by specifying only a service name, in which case the default class loader is used.
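A brief sketch that exercises the three factory methods described above; the class-loader argument is shown with the context class loader purely as an illustration:

import com.tangosol.coherence.transaction.Connection;
import com.tangosol.coherence.transaction.DefaultConnectionFactory;

DefaultConnectionFactory factory = new DefaultConnectionFactory();

// member of the default transactional service (TransactionalCache)
Connection conDefault = factory.createConnection();

// member of an explicitly named transactional service
Connection conNamed = factory.createConnection("TransactionalCache");

// same as above, but with an explicit class loader
Connection conLoader = factory.createConnection("TransactionalCache",
    Thread.currentThread().getContextClassLoader());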
The com.tangosol.coherence.transaction.Connection
interface represents a logical connection to a Coherence service. An active connection is always associated with a transaction. A new transaction implicitly starts when a connection is created and also when a transaction is committed or rolled back.
Transactions that are derived from a connection have several default behaviors that are listed below. The default behaviors balance ease-of-use with performance.
A transaction is automatically committed or rolled back for each cache operation. See "Using Auto-Commit Mode" below.
A transaction uses the read committed isolation level. See "Setting Isolation Levels" below.
A transaction immediately performs operations on the cache. See "Using Eager Mode" below.
A transaction has a default timeout of 300 seconds. See "Setting Transaction Timeout" below.
A connection's default behaviors can be changed using the Connection
instance's methods as required.
Auto-commit mode allows an application to choose whether each cache operation should be associated with a separate transaction or whether multiple cache operations should be executed as a single transaction. Each cache operation is executed in a distinct transaction when auto-commit is enabled; the framework automatically commits or rolls back the transaction after an operation completes and then the connection is associated with a new transaction and the next operation is performed. By default, auto-commit is enabled when a Connection
instance is created.
The following code from Example 27-5 demonstrates insert
operations that are each performed as a separate transaction:
OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

cache.insert(key, value);
cache.insert(key2, value2);
Multiple operations are performed as part of a single transaction by disabling auto-commit mode. If auto-commit mode is disabled, an application must manually demarcate transaction boundaries. The following code from Example 27-6 demonstrates insert
operations that are performed within a single transaction:
con.setAutoCommit(false);

OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

cache.insert(key, value);
cache.insert(key2, value2);

con.commit();
An application cannot use the commit()
or rollback()
method when auto-commit mode is enabled. Moreover, if auto-commit mode is enabled while in an active transaction, any work is automatically rolled back.
Isolation levels help control data concurrency and consistency. The Transaction Framework uses implicit write-locks and does not implement read-locks. Any attempt to write to a locked entry results in an UnableToAcquireLockException
; the request does not block. When a transaction is set to eager mode, the exception is thrown immediately. In non-eager mode, exceptions may not be thrown until the statement is flushed, which is typically at the next read or when the transaction commits. See "Using Eager Mode".
The Coherence Transaction Framework API supports the following isolation levels:
READ_COMMITTED
– This is the default isolation level if no level is specified. This isolation level guarantees that only committed data is visible and does not provide any consistency guarantees. This is the weakest of the isolation levels and generally provides the best performance at the cost of read consistency.
STMT_CONSISTENT_READ
– This isolation level provides statement-scoped read consistency which guarantees that a single operation only reads data for the consistent read version that was available at the time the statement began. The version may or may not be the most current data in the cache. See the note below for additional details.
STMT_MONOTONIC_CONSISTENT_READ
– This isolation level provides the same guarantees as STMT_CONSISTENT_READ
, but reads are also guaranteed to be monotonic. A read is guaranteed to return a version equal to or greater than any version that was previously encountered while using the connection. Due to the monotonic read guarantee, reads with this isolation may block until the necessary versions are available.
TX_CONSISTENT_READ
– This isolation level provides transaction-scoped read consistency which guarantees that all operations performed in a given transaction read data for the same consistent read version that was available at the time the transaction began. The version may or may not be the most current data in the cache. See the note below for additional details.
TX_MONOTONIC_CONSISTENT_READ
– This isolation level provides the same guarantees as TX_CONSISTENT_READ
, but reads are also guaranteed to be monotonic. A read is guaranteed to return a version equal to or greater than any version that was previously encountered while using the connection. Due to the monotonic read guarantee, the initial read in a transaction with this isolation may block until the necessary versions are available.
Note:
Consistent read isolation levels (statement or transaction) may lag slightly behind the most current data in the cache. If a transaction writes and commits a value, then immediately reads the same value in the next transaction with a consistent read isolation level, the updated value may not be immediately visible. If reading the most recent value is critical, then the READ_COMMITTED
isolation level is required.

Isolation levels are set on a Connection
instance and must be set before starting an active transaction. For example:
...
Connection con = new DefaultConnectionFactory().
    createConnection("TransactionalCache");
con.setIsolationLevel(STMT_CONSISTENT_READ);
...
Eager mode allows an application to control when cache operations are performed on the cluster. If eager mode is enabled, cache operations are immediately performed on the cluster. If eager mode is disabled, cache operations are deferred, if possible, and queued to be performed as a batch operation. Typically, an operation can only be queued if it does not return a value. An application may be able to increase performance by disabling eager mode.
By default, eager mode is enabled and cache operations are immediately performed on the cluster. The following example demonstrates disabling eager mode.
...
Connection con = new DefaultConnectionFactory().
    createConnection("TransactionalCache");
con.setEager(false);
...
The transaction timeout allows an application to control how long a transaction can remain active before it is rolled back. The transaction timeout is associated with the current transaction and any new transactions that are associated with the connection.
The timeout value is specified in seconds. The default timeout value is 300
seconds. The following example demonstrates setting the transaction timeout value.
...
Connection con = new DefaultConnectionFactory().
    createConnection("TransactionalCache");
con.setTransactionTimeout(420);
...
The com.tangosol.coherence.transaction.OptimisticNamedCache interface extends the NamedCache interface and adds three operations: update(), delete(), and insert().

All transactional caches are derived from this type. This cache type ensures that an application uses the framework's concurrency and data locking implementations.
Note:
OptimisticNamedCache
does not extend any operations from the ConcurrentMap
interface since it uses its own locking strategy.

The following code sample from Example 27-5 demonstrates getting a transactional cache called MyTxCache
and performs operations on the cache. For this example, a transactional cache that is named MyTxCache
must be located in the cache configuration file at run-time. For details on defining a transactional cache, see "Defining Transactional Caches".
OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

cache.insert(key, value);
cache.insert(key2, value2);
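Building on the same connection con and keys used in the earlier examples, the following sketch shows the added update and delete operations. The predicate filter (the same idiom used later in Example 27-8) makes the update conditional on the value that was previously read, while a null predicate performs an unconditional delete:

import com.tangosol.coherence.transaction.OptimisticNamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.extractor.IdentityExtractor;
import com.tangosol.util.filter.EqualsFilter;

OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

// update only if the current value still equals the value that was read
Filter predicate = new EqualsFilter(IdentityExtractor.INSTANCE, value);
cache.update(key, "newValue", predicate);

// delete unconditionally; a null predicate skips the optimistic check
cache.delete(key2, null);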
Transactional caches support Portable Object Format (POF) serialization within transactions. POF is enabled within a transactional cache scheme using the <serializer>
element. The following example demonstrates enabling POF serialization in a transactional cache scheme.
<transactional-scheme>
   <scheme-name>example-transactional</scheme-name>
   <service-name>TransactionalCache</service-name>
   <serializer>
      <instance>
         <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      </instance>
   </serializer>
   <autostart>true</autostart>
</transactional-scheme>
The Transaction Framework API also includes its own POF types which are defined in the txn-pof-config.xml
POF configuration file which is included in coherence.jar
. The POF types are required and must be found at run-time.
To load the transaction POF types at run time, modify an application's POF configuration file and include the txn-pof-config.xml
POF configuration file using the <include>
element. For example:
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config
   coherence-pof-config.xsd">
   <user-type-list>
      <include>coherence-pof-config.xml</include>
      <include>txn-pof-config.xml</include>
   </user-type-list>
   ...
</pof-config>
See "Combining Multiple POF Configuration Files" for more information on using the <include>
element to combine POF configuration files.
The Transaction Framework API stores transactional data in internal distributed caches that use backing maps. The data includes versions of all keys and their values for a transactional cache. The framework uses the stored data in roll-back scenarios and also during recovery.
Due to the internal storage requirements, transactional caches have a constant overhead associated with every entry written to the cache. Moreover, transactional caches use multi-version concurrency control, which means that every write operation produces a new row into the cache even if it is an update. Therefore, the Transaction Framework API uses a custom eviction policy to help manage the growth of its internal storage caches. The eviction policy works by determining which versions of an entry can be kept and which versions are eligible for eviction. The latest version for a given key (the most recent) is never evicted. The eviction policy is enforced whenever a configured high-water mark is reached. After the threshold is reached, 25% of the eligible versions are removed.
Note:
The eviction policy does not take the entire transactional storage into account when comparing the high-water mark. Therefore, transactional storage slightly exceeds the high-water mark before the storage eviction policy is notified.
It is possible that storage for a transactional cache exceeds the maximum heap size if the cache is sufficiently broad (large number of distinct keys) since the current entry for a key is never evicted.
Because the storage eviction policy is notified on every write where the measured storage size exceeds the high-water mark, the default high-water mark may have to be increased so that it is larger than the size of the current data set. Otherwise, the eviction policy is notified on every write after the size of the current data set exceeds the high-water mark, resulting in decreased performance. If consistent reads are not used, the value can be set so that it slightly exceeds the projected size of the current data set, since no historical versions are ever read. When using consistent reads, the high-water mark should be high enough to provide for enough historical versions. Use the formulas below to approximate the transactional storage size.
The high-water mark is configured using the <high-units>
element within a transactional scheme definition. The following example demonstrates configuring a high-water mark of 20 MB.
<transactional-scheme>
   ...
   <high-units>20M</high-units>
   ...
</transactional-scheme>
The following formulas provide a rough estimate of the memory usage for a row in a transactional cache.
For insert operations:
Primary – key(serialized) + key (on-heap size) + value(serialized) + 1095 bytes constant overhead
Backup – key(serialized) + value(serialized) + 530 bytes constant overhead
For update operations:
Primary – value(serialized) + 685 bytes constant overhead
Backup – value(serialized) + 420 bytes constant overhead
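As a worked example, the following sketch applies the formulas above to a hypothetical entry with a 20-byte serialized key, a 48-byte on-heap key, and a 200-byte serialized value. The constants come directly from the formulas; the other sizes are application-specific estimates:

public class TxStorageEstimate {
    // primary + backup cost, in bytes, of one insert
    static long insertCost(long keySerialized, long keyOnHeap, long valueSerialized) {
        long primary = keySerialized + keyOnHeap + valueSerialized + 1095;
        long backup  = keySerialized + valueSerialized + 530;
        return primary + backup;
    }

    // primary + backup cost, in bytes, of one update (each update adds a new version row)
    static long updateCost(long valueSerialized) {
        long primary = valueSerialized + 685;
        long backup  = valueSerialized + 420;
        return primary + backup;
    }

    public static void main(String[] args) {
        System.out.println("per insert: " + insertCost(20, 48, 200) + " bytes");  // 2113
        System.out.println("per update: " + updateCost(200) + " bytes");          // 1505
    }
}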
The Transaction Framework API provides Java extend clients with the ability to perform cache operations within a transaction. In this case, the transaction API is used within an entry processor that is located on the cluster. At run time, the entry processor is executed on behalf of the Java client.
This section does not include detailed instructions on how to set up and use Coherence*Extend. For those new to Coherence*Extend, see "Setting Up Coherence*Extend" in Oracle Coherence Client Guide. For details on performing transactions from C++ or .NET clients, see "Performing Transactions for C++ Clients" and "Performing Transactions for .NET Clients" in the Oracle Coherence Client Guide.
The remaining topics in this section describe the steps that are required to perform transactions from Java extend clients.
Transactions are performed using the transaction API within an entry processor that resides on the cluster. The entry processor is executed on behalf of a Java extend client.
Example 27-8 demonstrates an entry processor that performs a simple update
operation within a transaction. At run time, the entry processor must be located on both the client and cluster.
Example 27-8 Entry Processor for Extend Client Transaction
public class MyTxProcessor extends AbstractProcessor {
    public Object process(InvocableMap.Entry entry) {
        // obtain a connection and transaction cache
        ConnectionFactory connFactory = new DefaultConnectionFactory();
        Connection conn = connFactory.createConnection("TransactionalCache");
        OptimisticNamedCache cache = conn.getNamedCache("MyTxCache");

        conn.setAutoCommit(false);

        // get a value for an existing entry
        String sValue = (String) cache.get("existingEntry");

        // create predicate filter
        Filter predicate = new EqualsFilter(IdentityExtractor.INSTANCE, sValue);

        try {
            // update the previously obtained value
            cache.update("existingEntry", "newValue", predicate);
        } catch (PredicateFailedException e) {
            // value was updated after it was read
            conn.rollback();
            return false;
        } catch (UnableToAcquireLockException e) {
            // row is being updated by another transaction
            conn.rollback();
            return false;
        }

        try {
            conn.commit();
        } catch (RollbackException e) {
            // transaction was rolled back
            return false;
        }
        return true;
    }
}
Transactions require a transactional cache to be defined in the cluster-side cache configuration file. For details on defining a transactional cache, see "Defining Transactional Caches".
The following example defines a transactional cache that is named MyTxCache
, which is the cache name that was used by the entry processor in Example 27-8. The example also includes a proxy scheme and a distributed cache scheme that are required to execute the entry processor from a remote client. The proxy is configured to accept client TCP/IP connections on localhost
at port 9099
.
<?xml version='1.0'?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>MyTxCache</cache-name>
         <scheme-name>example-transactional</scheme-name>
      </cache-mapping>
      <cache-mapping>
         <cache-name>dist-example</cache-name>
         <scheme-name>example-distributed</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <transactional-scheme>
         <scheme-name>example-transactional</scheme-name>
         <thread-count>7</thread-count>
         <high-units>15M</high-units>
         <task-timeout>0</task-timeout>
         <autostart>true</autostart>
      </transactional-scheme>

      <distributed-scheme>
         <scheme-name>example-distributed</scheme-name>
         <service-name>DistributedCache</service-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>

      <proxy-scheme>
         <service-name>ExtendTcpProxyService</service-name>
         <thread-count>5</thread-count>
         <acceptor-config>
            <tcp-acceptor>
               <local-address>
                  <address>localhost</address>
                  <port>9099</port>
               </local-address>
            </tcp-acceptor>
         </acceptor-config>
         <autostart>true</autostart>
      </proxy-scheme>
   </caching-schemes>
</cache-config>
Remote clients require a remote cache to connect to the cluster's proxy and run a transactional entry processor. The remote cache is defined in the client-side cache configuration file.
The following example configures a remote cache to connect to a proxy that is located on localhost
at port 9099
. In addition, the name of the remote cache (dist-example
) must match the name of a cluster-side cache that is used when initiating the transactional entry processor.
<?xml version='1.0'?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-example</cache-name>
         <scheme-name>extend</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <remote-cache-scheme>
         <scheme-name>extend</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>localhost</address>
                     <port>9099</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>30s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>30s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-cache-scheme>
   </caching-schemes>
</cache-config>
A Java extend client invokes an entry processor as normal. However, at run time, the cluster-side entry processor is invoked. The client is unaware that the invocation has been delegated. The following example demonstrates how a Java client calls the entry processor shown in Example 27-8.
NamedCache cache = CacheFactory.getCache("dist-example");

Object oReturn = cache.invoke("AnyKey", new MyTxProcessor());

System.out.println("Result of extend tx execution: " + oReturn);
The transaction framework leverages the existing Coherence JMX management framework. See Oracle Coherence Management Guide for detailed information on enabling and using JMX in Coherence.
This section describes two MBeans that provide transaction information: CacheMBean
and TransactionManagerMBean
.
The CacheMBean
managed resource provides attributes and operations for all caches, including transactional caches. Many of the MBean attributes are not applicable to transactional caches; invoking such attributes simply returns a value of -1. A cluster node may have zero or more instances of cache managed beans for transactional caches. The object name uses the form:
type=Cache, service=<service name>, name=<cache name>, nodeId=<cluster node's id>
Table 27-2 describes the CacheMBean
attributes that are supported for transactional caches.
Table 27-2 Transactional Cache Supported Attributes
Attribute | Type | Description
---|---|---
AverageGetMillis | Double | The average number of milliseconds per get() invocation.
AveragePutMillis | Double | The average number of milliseconds per put() invocation.
Description | String | The cache description.
HighUnits | Integer | The limit of the cache size measured in units. The cache prunes itself automatically after it reaches its maximum unit level. This is often referred to as the high water mark of the cache.
Size | Integer | The number of entries in the current data set.
TotalGets | Long | The total number of get() operations.
TotalGetsMillis | Long | The total number of milliseconds spent on get() operations.
TotalPuts | Long | The total number of put() operations.
TotalPutsMillis | Long | The total number of milliseconds spent on put() operations.
For transactional caches, the resetStatistics
operation is supported and resets all transaction manager statistics.
The TransactionManagerMBean
managed resource is specific to the transactional framework. It provides global transaction manager statistics by aggregating service-level statistics from all transaction service instances. Each cluster node has an instance of the transaction manager managed bean per service. The object name uses the form:
type=TransactionManager, service=<service name>, nodeId=<cluster node's id>
Note:
For certain transaction manager attributes, the count is maintained at the coordinator node for the transaction, even though multiple nodes may have participated in the transaction. For example, a transaction may include modifications to entries stored on multiple nodes, but the TotalCommitted
attribute is only incremented on the MBean on the node that coordinated the commit of that transaction.

Table 27-3 describes the TransactionManagerMBean attributes.
Table 27-3 TransactionManagerMBean Attributes
Attribute | Type | Description
---|---|---
TotalActive | Long | The total number of currently active transactions. An active transaction is counted as any transaction that contains at least one modified entry and has yet to be committed or rolled back. Note that the count is maintained at the coordinator node for the transaction, even though multiple nodes may have participated in the transaction.
TotalCommitted | Long | The total number of transactions that have been committed by the Transaction Manager since the last time the statistics were reset. Note that the count is maintained at the coordinator node for the transaction being committed, even though multiple nodes may have participated in the transaction.
TotalRecovered | Long | The total number of transactions that have been recovered by the Transaction Manager since the last time the statistics were reset. Note that the count is maintained at the coordinator node for the transaction being recovered, even though multiple nodes may have participated in the transaction.
TotalRolledback | Long | The total number of transactions that have been rolled back by the Transaction Manager since the last time the statistics were reset. Note that the count is maintained at the coordinator node for the transaction being rolled back, even though multiple nodes may have participated in the transaction.
TotalTransactionMillis | Long | The cumulative time (in milliseconds) spent on active transactions.
TimeoutMillis | Long | The transaction timeout value in milliseconds. Note that this value only applies to transactional connections obtained after the value is set. This attribute is currently not supported.
The TransactionManagerMBean
includes a single operation called resetStatistics
, which resets all transaction manager statistics.
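As an illustration, a standard JMX client can read these attributes and invoke resetStatistics remotely. The sketch below assumes JMX management is enabled on a node reachable at the given service URL, that the MBeans are registered under the default Coherence domain, and that the service is named TransactionalCache; adjust all three for your environment:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TxManagerStats {
    public static void main(String[] args) throws Exception {
        // connect to a JMX-enabled Coherence node (URL is an assumption)
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9991/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection server = connector.getMBeanServerConnection();

        // query all TransactionManager MBeans for the TransactionalCache service
        ObjectName pattern = new ObjectName(
                "Coherence:type=TransactionManager,service=TransactionalCache,*");
        Set<ObjectName> names = server.queryNames(pattern, null);

        for (ObjectName name : names) {
            // read an aggregated statistic from the coordinator node's MBean
            System.out.println(name + " TotalCommitted="
                    + server.getAttribute(name, "TotalCommitted"));

            // reset all transaction manager statistics on this node
            server.invoke(name, "resetStatistics", null, null);
        }

        connector.close();
    }
}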
Coherence includes a J2EE Connector Architecture (J2CA) 1.5 compliant resource adapter that is used to get connections to a Coherence cache. The resource adapter leverages the connection API of the Coherence Transaction Framework and therefore provides default transaction guarantees. In addition, the resource adapter provides full XA support, which allows Coherence to participate in global transactions. A global transaction is a unit of work that is managed by one or more resource managers and is controlled and coordinated by an external transaction manager, such as the transaction manager that is included with WebLogic Server or OC4J.
The resource adapter is packaged as a standard Resource Adapter Archive (RAR) and is named coherence-transaction.rar
. The resource adapter is located in COHERENCE_HOME
/lib
and can be deployed to any Java EE container compatible with J2CA 1.5. The resource adapter includes proprietary resource adapter deployment descriptors for WebLogic (weblogic-ra.xml
) and OC4J (oc4j-ra.xml
) and can be deployed to these platforms without modification. Check your application server vendor's documentation for details on defining a proprietary resource adapter descriptor that can be included within the RAR.
Note:
Coherence continues to include thecoherence-tx.rar
resource adapter for backward compatibility. However, it is strongly recommended that applications use the coherence-transaction.rar
resource adapter which provides full XA support. Those accustomed to using the Coherence CacheAdapter
class can continue to do so with either resource adapter. See "Using the Coherence Cache Adapter for Transactions".
Java EE application components (Servlets, JSPs, and EJBs) use the Coherence resource adapter to perform cache operations within a transaction. The resource adapter supports both local transactions and global transactions. Local transactions are used to perform cache operations within a transaction that is only scoped to a Coherence cache and cannot participate in a global transaction. Global transactions are used to perform cache operations that automatically commit or roll back based on the outcome of multiple resources that are enlisted in the transaction.
As with other Java EE resources, the Java Naming and Directory Interface (JNDI) API is used to look up the resource adapter's connection factory. The connection factory is then used to get logical connections to a Coherence cache.
The following examples demonstrate how to use the Coherence resource adapter to perform cache operations within a global transaction. Example 27-9 is an example of using Container Managed Transactions (CMT), where the container ensures that all methods execute within the scope of a global transaction. Example 27-10 is an example of user-controlled transactions, where the application component uses the Java Transaction API (JTA) to manually demarcate transaction boundaries.
Transactions require a transactional cache scheme to be defined within a cache configuration file. These examples use the transactional cache that was defined in Example 27-4.
Example 27-9 Performing a Transaction When Using CMT
Context initCtx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory)
    initCtx.lookup("java:comp/env/eis/CoherenceTxCF");

Connection con = cf.createConnection("TransactionalCache");

try {
    OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

    cache.delete("key1", null);
    cache.insert("key1", "value1");
} finally {
    con.close();
}
Example 27-10 Performing a User-Controlled Transaction
Context initCtx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory)
    initCtx.lookup("java:comp/env/eis/CoherenceTxCF");

UserTransaction ut = (UserTransaction)
    new InitialContext().lookup("java:comp/UserTransaction");

ut.begin();
Connection con = cf.createConnection("TransactionalCache");

try {
    OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

    cache.delete("key1", null);
    cache.insert("key1", "value1");
    ut.commit();
} catch (Exception e) {
    ut.rollback();
    throw e;
} finally {
    con.close();
}
Applications use the com.tangosol.coherence.transaction.ConnectionFactory
interface to create connections to a Coherence cluster. An instance of this interface is obtained using a JNDI lookup. The following code sample from Example 27-10 performs a JNDI lookup for a connection factory that is bound to the java:comp/env/eis/CoherenceTxCF
namespace:
Context initCtx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory)
    initCtx.lookup("java:comp/env/eis/CoherenceTxCF");
The ConnectionFactory
is then used to create a com.tangosol.coherence.transaction.Connection
instance. The Connection
instance represents a logical connection to a Coherence service:
Connection con = cf.createConnection("TransactionalCache");
The createConnection(ServiceName
) method creates a connection that is a member of a transactional service. The service name is a String
that indicates which transactional service this connection belongs to and must map to a service name that is defined in a <transactional-scheme>
within a cache configuration file. For details on defining transactional cache schemes and specifying the service name, see "Defining Transactional Caches".
A Connection
instance always has an associated transaction that is scoped within the connection. A new transaction implicitly starts when the connection is created and whenever a transaction is committed or rolled back. The following default behaviors are associated with a connection. For more information on the Connection
interface and changing the default settings, see "Using Transactional Connections".
Connections are in auto-commit mode by default which means that each statement is executed in a distinct transaction and when the statement completes the transaction is committed and the connection is associated with a new transaction.
Note:
When the connection is used for a global transaction, auto-commit mode is disabled and cannot be enabled. Cache operations are performed in a single transaction and either commit or roll back as a unit. In addition, the Connection
interface's commit and rollback methods cannot be used if the connection is enlisted in a global transaction.

The connection's isolation level is set to READ_COMMITTED
. The transaction can only view committed data from other transactions.
Eager mode is enabled by default, which means every operation is immediately flushed to the cluster and is not queued to be flushed in batches.
The default transaction timeout is 300 seconds.
Note:
When the connection is used for a global transaction, the transaction timeout that is associated with a connection is overridden by the transaction timeout value that is set by a container's JTA configuration. If an application attempts to set the transaction timeout value directly on the connection while it is enlisted in a global transaction, the attempt is ignored and a warning message is emitted indicating that the transaction timeout cannot be set. The original timeout value that is set on the connection is restored after the global transaction completes.

The com.tangosol.coherence.transaction.OptimisticNamedCache
interface extends the NamedCache
interface. It supports all the customary named cache operations and adds its own operations for updating, deleting, and inserting objects into a cache. When performing transactions, all cache instances must be derived from this type. The following code sample from Example 27-10 demonstrates getting a named cache called MyTxCache
and performing operations on the cache. The cache must be defined in the cache configuration file.
try {
    OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

    cache.delete("key1", null);
    cache.insert("key1", "value1");
Note:
OptimisticNamedCache
does not extend any operations from the ConcurrentMap
interface since it uses its own locking strategy.

Application components that perform user-controlled transactions use a JNDI lookup to get a JTA UserTransaction
interface instance. The interface provides methods for demarcating transaction boundaries. The following code sample from Example 27-10 demonstrates getting a UserTransaction
instance and demarcating the transaction boundaries:
UserTransaction ut = (UserTransaction)
    new InitialContext().lookup("java:comp/UserTransaction");

ut.begin();
Connection con = cf.createConnection("TransactionalCache");

try {
    OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

    cache.delete("key1", null);
    cache.insert("key1", "value1");
    ut.commit();
The above code demonstrates a typical scenario where the connection and the named cache exist within the transaction boundaries. However, the resource adapter also supports scenarios where connections are used across transaction boundaries and are obtained before the start of a global transaction. For example:
Connection con = cf.createConnection("TransactionalCache");

try {
    OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

    cache.delete("key1", null);

    UserTransaction ut = (UserTransaction)
        new InitialContext().lookup("java:comp/UserTransaction");

    ut.begin();
    cache.insert("key1", "value1");
    ut.commit();
This section provides instructions for packaging Java EE applications that use the Coherence resource adapter so that they can be deployed to an application server.
Application components must provide a resource reference for the resource adapter's connection factory. For EJBs, the resource references are defined in the ejb-jar.xml
deployment descriptor. For Servlets and JSPs, the resource references are defined in the web.xml
deployment descriptor. The following sample demonstrates defining a resource reference for the resource adapter's connection factory and is applicable to the code in Example 27-10:
<resource-ref>
   <res-ref-name>eis/CoherenceTxCF</res-ref-name>
   <res-type>
      com.tangosol.coherence.transaction.ConnectionFactory
   </res-type>
   <res-auth>Container</res-auth>
</resource-ref>
In addition to the standard Java EE application component deployment descriptors, many application servers require a proprietary deployment descriptor as well. For example, WebLogic Server resource references for Web components and EJBs are defined in the weblogic.xml and weblogic-ejb-jar.xml files, respectively:
<reference-descriptor>
   <resource-description>
      <res-ref-name>eis/CoherenceTxCF</res-ref-name>
      <jndi-name>tangosol.coherenceTx</jndi-name>
   </resource-description>
</reference-descriptor>
Consult your application server vendor's documentation for detailed information on using their proprietary application component deployment descriptors and for information on alternate methods for defining resource references using dependency injection or annotations.
Java EE applications must provide a module reference for the Coherence resource adapter. The module reference is defined in the EAR's application.xml
file. The module reference points to the location of the Coherence RAR file (coherence-transaction.rar
) within the EAR file. For example, the following definition points to the Coherence resource adapter RAR file that is located in the root of the EAR file:
<application>
   ...
   <module>
      <connector>coherence-transaction.rar</connector>
   </module>
   ...
</application>
In addition to the standard Java EE application deployment descriptors, many application servers require a proprietary application deployment descriptor as well. For example, the Coherence resource adapter is defined in the WebLogic server weblogic-application.xml
file as follows:
<weblogic-application>
   <classloader-structure>
      ...
      <module-ref>
         <module-uri>coherence-transaction.rar</module-uri>
      </module-ref>
      ...
   </classloader-structure>
</weblogic-application>
Consult your application server vendor's documentation for detailed information on using their proprietary application deployment descriptors.
Java EE applications that use the Coherence resource adapter must include the coherence-transaction.rar
file and the coherence.jar
file within the EAR file. The following example places the libraries at the root of the EAR file:
/
/coherence-transaction.rar
/coherence.jar
When deploying to WebLogic server, the coherence.jar
file must be placed in the /APP-INF/lib
directory of the EAR file. For example:
/
/coherence-transaction.rar
/APP-INF/lib/coherence.jar
This deployment scenario results in a single Coherence cluster node that is shared by all application components in the EAR. See Oracle Coherence Administrator's Guide for different Coherence deployment options.
The Coherence CacheAdapter
class provides an alternate client approach for creating transactions and is required when using the coherence-tx.rar
resource adapter. The new coherence-transaction.rar
resource adapter also supports the CacheAdapter
class (with some modifications) and allows those accustomed to using the class to leverage the benefits of the new resource adapter. However, it is recommended that applications use the Coherence resource adapter natively, which offers stronger transactional support. Examples for both resource adapters are provided in this section.
Example 27-11 demonstrates performing cache operations within a transaction when using the CacheAdapter
class with the new coherence-transaction.rar
resource adapter. For this example a transactional cache named MyTxCache
must be configured in the cache configuration file. The cache must map to a transactional cache scheme with the service name TransactionalCache
. See "Defining Transactional Caches" for more information on defining a transactional cache scheme.
Example 27-11 Using the CacheAdapter Class When Using coherence-transaction.rar
Context initCtx = new InitialContext();
CacheAdapter adapter = new CacheAdapter(initCtx,
    "java:comp/env/eis/CoherenceTxCCICF", 0, 0, 0);

adapter.connect("TransactionalCache", "scott", "tiger");

try {
    UserTransaction ut = (UserTransaction)
        new InitialContext().lookup("java:comp/UserTransaction");

    ut.begin();
    OptimisticNamedCache cache = (OptimisticNamedCache)
        adapter.getNamedCache("MyTxCache", getClass().getClassLoader());

    cache.delete("key", null);
    cache.insert("key", "value");
    ut.commit();
} finally {
    adapter.close();
}
Example 27-12 demonstrates performing cache operations within a transaction when using the CacheAdapter
class with the coherence-tx.rar
resource adapter.
Example 27-12 Using the CacheAdapter Class When Using coherence-tx.rar
String key = "key"; Context ctx = new InitialContext(); UserTransaction tx = null; try { // the transaction manager from container tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction"); tx.begin(); // the try-catch-finally block below is the block of code // that could be on an EJB and therefore automatically within // a transactional context CacheAdapter adapter = null; try { adapter = new CacheAdapter(ctx, "tangosol.coherenceTx", CacheAdapter.CONCUR_OPTIMISTIC, CacheAdapter.TRANSACTION_GET_COMMITTED, 0); NamedCache cache = adapter.getNamedCache("dist-test", getClass().getClassLoader()); int n = ((Integer)cache.get(key)).intValue(); cache.put(key, new Integer(++n)); } catch (Throwable t) { String sMsg = "Failed to connect: " + t; System.err.println(sMsg); t.printStackTrace(System.err); } finally { try { adapter.close(); } catch (Throwable ex) { System.err.println("SHOULD NOT HAPPEN: " + ex); } } } finally { try { tx.commit(); } catch (Throwable t) { String sMsg = "Failed to commit: " + t; System.err.println(sMsg); } }