On Oracle JVMs, the maximum amount of NIO (direct) memory can be set manually if it is not already set:
-XX:MaxDirectMemorySize=MAX_HEAP_SIZE
Each application has different thread requirements based on the number of clients and the number of operations being performed. Performance should be closely monitored to ensure that there are enough threads to service client requests without saturating clients with too many threads. In addition, log messages are emitted when the thread pool is using its maximum number of threads, which may indicate that additional threads are required.
Client applications are classified into two general categories: active applications and passive applications. In active applications, the extend clients send many requests (put, get, and so on) which are handled by the proxy service. The proxy service requires a large number of threads to sufficiently handle these numerous tasks.
In passive applications, the client waits on events (such as map listeners) based on some specified criteria. Events are handled by a distributed cache service. This service uses worker threads to push events to the client. For these tasks, the thread pool configuration for the distributed cache service should include enough worker threads. See distributed-scheme in Developing Applications with Oracle Coherence.
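As an illustration, worker threads for a distributed cache service are sized in the <distributed-scheme> element of the cache configuration file. The following sketch sets an explicit worker thread pool; the scheme name and thread counts are placeholders for illustration, not recommendations:

```xml
<distributed-scheme>
   <scheme-name>dist-events</scheme-name>
   <thread-count-min>10</thread-count-min>
   <thread-count-max>20</thread-count-max>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
```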
Note:
Near caches on extend clients use map listeners when performing the ALL, PRESENT, and AUTO invalidation strategies. Write-heavy applications that use near caches generate many map events.
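For reference, a near cache invalidation strategy is set with the <invalidation-strategy> element of a <near-scheme>. The following sketch (the scheme names are illustrative, and the referenced remote scheme is assumed to exist elsewhere in the configuration) selects the present strategy, which registers listeners only for keys the client has cached locally:

```xml
<near-scheme>
   <scheme-name>near-example</scheme-name>
   <front-scheme>
      <local-scheme/>
   </front-scheme>
   <back-scheme>
      <remote-cache-scheme>
         <scheme-ref>extend-remote</scheme-ref>
      </remote-cache-scheme>
   </back-scheme>
   <invalidation-strategy>present</invalidation-strategy>
</near-scheme>
```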
To set thread pooling thresholds for a proxy service, add the <thread-count-max> and <thread-count-min> elements within the <proxy-scheme> element. See proxy-scheme in Developing Applications with Oracle Coherence. The following example changes the default pool settings.
Note:
The thread pool is enabled by default and does not require configuration. The default setup allows Coherence to automatically tune the thread count based on the load at any given point in time. Consider explicitly configuring the thread pool only if the automatic tuning proves insufficient.
Setting a minimum and maximum thread count of zero forces the proxy service thread to handle all requests; no worker threads are used. Using the proxy service thread to handle client requests is not a best practice.
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <thread-count-max>75</thread-count-max>
   <thread-count-min>10</thread-count-min>
   <autostart>true</autostart>
</proxy-scheme>
The coherence.proxy.threads.max and coherence.proxy.threads.min system properties specify the dynamic thread pooling thresholds instead of using the cache configuration file. For example:
-Dcoherence.proxy.threads.max=75 -Dcoherence.proxy.threads.min=10
In most scenarios, dynamic thread pooling is the best way to ensure that a proxy service always has enough threads to handle requests. In controlled applications where client usage is known, an explicit number of threads can be specified by setting the <thread-count-min> and <thread-count-max> elements to the same value. The following example sets 10 threads for use by a proxy service. Additional threads are not created automatically.
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <thread-count-min>10</thread-count-min>
   <thread-count-max>10</thread-count-max>
   <autostart>true</autostart>
</proxy-scheme>
InvocationService allows a service member to invoke arbitrary code on any node in the cluster. On Coherence*Extend, however, InvocationService calls are by default serviced by the proxy to which the client is connected.

Collection classes (such as ArrayList, HashSet, HashMap, and so on) are deserialized as immutable arrays when cached by Coherence*Extend clients. A ClassCastException is thrown if the objects are extracted and cast to their original types. As an alternative, use a Java interface object (such as a List, Set, or Map) or encapsulate the collection object in another object. Both of these techniques are illustrated in the following example:
Example 6-1 Casting an ArrayList Object
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ExtendExample {
    @SuppressWarnings({"unchecked"})
    public static void main(String[] asArgs) {
        System.setProperty("coherence.cacheconfig", "client-config.xml");
        NamedCache cache = CacheFactory.getCache("test");

        // Create a sample collection
        List list = new ArrayList();
        for (int i = 0; i < 5; i++) {
            list.add(String.valueOf(i));
        }
        cache.put("list", list);
        List listFromCache = (List) cache.get("list");
        System.out.println("Type of list put in cache: " + list.getClass());
        System.out.println("Type of list in cache: " + listFromCache.getClass());

        Map map = new TreeMap();
        for (Iterator i = list.iterator(); i.hasNext(); ) {
            Object o = i.next();
            map.put(o, o);
        }
        cache.put("map", map);
        Map mapFromCache = (Map) cache.get("map");
        System.out.println("Type of map put in cache: " + map.getClass());
        System.out.println("Type of map in cache: " + mapFromCache.getClass());
    }
}
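The underlying pitfall can be demonstrated with the JDK alone, without a running cluster. The following sketch uses Collections.unmodifiableList purely as a stand-in for a value read back from the cache, where the concrete implementation type is not guaranteed: programming against the List interface works, while casting back to ArrayList fails at runtime.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CastSketch {
    public static void main(String[] args) {
        List<String> original = new ArrayList<>();
        original.add("a");
        original.add("b");

        // Stand-in for a value read back from the cache: the concrete
        // type is not guaranteed to match what was put in.
        List<String> fromCache = Collections.unmodifiableList(original);

        // Safe: use the List interface.
        System.out.println("size=" + fromCache.size());

        // Unsafe: casting to the original implementation type fails.
        try {
            ArrayList<String> bad = (ArrayList<String>) fromCache;
            System.out.println("cast succeeded: " + bad.getClass());
        } catch (ClassCastException e) {
            System.out.println("ClassCastException caught");
        }
    }
}
```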
Coherence provides the option of configuring a POF serializer for cache servers, which has the effect of storing data directly in the cache in POF format.
This can have the following impact on your applications:
.NET or C++ clients that only perform puts or gets do not require a Java version of the object. Java versions are still required if deserializing on the server side (for entry processors, cache stores, and so on).
POF serializers remove the requirement to serialize and deserialize on the proxy, thus reducing the proxy's memory and CPU requirements.
Key manipulation within the proxy is discouraged, because it can interfere with the object decoration used by the POF serializer and cause the extend client to not recognize the key.
Example 6-2 illustrates a fragment from a cache configuration file, which configures the default POF serializer that is defined in the operational deployment descriptor.
Example 6-2 Configuring a POFSerializer for a Distributed Cache
...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <serializer>pof</serializer>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...