6 Best Practices for Coherence*Extend

There are best practices and guidelines to consider when configuring and running Coherence*Extend.

This chapter includes the following sections:

  • Do Not Run a Near Cache on a Proxy Server
  • Configure Heap NIO Space to be Equal to the Max Heap Size
  • Configure Proxy Service Thread Pooling
  • Be Careful When Making InvocationService Calls
  • Be Careful When Placing Collection Classes in the Cache
  • Configure POF Serializers for Cache Servers
  • Configuring Firewalls for Extend Clients

6.1 Do Not Run a Near Cache on a Proxy Server

Running a near cache on a proxy server results in higher heap usage and more network traffic on the proxy nodes with little to no benefit. By definition, a near cache provides local cache access to both recently and often-used data. If a proxy server is configured with a near cache, it locally caches data accessed by its remote clients. It is unlikely that these clients are consistently accessing the same subset of data, thus resulting in a low hit ratio on the near cache. For these reasons, it is recommended that a near cache not be used on a proxy server. To ensure that the proxy server is not running a near cache, remove all near schemes from the cache configuration being used for the proxy.

6.2 Configure Heap NIO Space to be Equal to the Max Heap Size

NIO memory is used for TCP connections into the proxy and for POF serialization and deserialization. The amount of off-heap NIO space should be equal to the maximum heap space.

On Oracle JVMs, NIO memory can be set manually if it is not already set:
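A minimal sketch of the JVM arguments, assuming a 1 GB heap; the sizes shown are illustrative and should match your actual heap settings:

```
# Size the direct (NIO) buffer space to match the maximum heap size
-Xms1024m -Xmx1024m -XX:MaxDirectMemorySize=1024m
```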


6.3 Configure Proxy Service Thread Pooling

You can change the thread pool default settings to optimize client performance. Proxy services use a dynamic thread pool for daemon (worker) threads. The thread pool automatically adds and removes threads based on the number of client requests, total backlog of requests, and the total number of idle threads. The thread pool helps ensure that there are enough threads to meet the demand of extend clients and that resources are not wasted on idle threads.

This section includes the following topics:

  • Understanding Proxy Service Threading
  • Setting Proxy Service Thread Pooling Thresholds
  • Setting an Exact Number of Threads

6.3.1 Understanding Proxy Service Threading

Each application has different thread requirements based on the number of clients and the amount of operations being performed. Performance should be closely monitored to ensure that there are enough threads to service client requests without saturating clients with too many threads. In addition, log messages are emitted when the thread pool is using its maximum number of threads, which may indicate that additional threads are required.

Client applications are classified into two general categories: active applications and passive applications. In active applications, the extend clients send many requests (put, get, and so on) which are handled by the proxy service. The proxy service requires a large number of threads to sufficiently handle these numerous tasks.

In passive applications, the client waits on events (such as map listeners) based on some specified criteria. Events are handled by a distributed cache service. This service uses worker threads to push events to the client. For these tasks, the thread pool configuration for the distributed cache service should include enough worker threads. See distributed-scheme in Developing Applications with Oracle Coherence.
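As a sketch, the worker thread pool of a distributed cache service can be bounded in the cache configuration file; the scheme name and thread counts below are illustrative values, not recommendations:

```xml
<distributed-scheme>
  <scheme-name>dist-default</scheme-name>
  <!-- Illustrative worker thread pool bounds for the service that
       pushes map events to passive clients -->
  <thread-count-min>10</thread-count-min>
  <thread-count-max>50</thread-count-max>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```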


Near caches on extend clients use map listeners when performing invalidation strategies of ALL, PRESENT, and AUTO. Write-heavy applications that use near caches generate many map events.

6.3.2 Setting Proxy Service Thread Pooling Thresholds

To set thread pooling thresholds for a proxy service, add the <thread-count-max> and <thread-count-min> elements within the <proxy-scheme> element. See proxy-scheme in Developing Applications with Oracle Coherence. The following example changes the default pool settings.

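A sketch of such a configuration; the service name is illustrative, and the thresholds shown are arbitrary values rather than recommended settings:

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <!-- Illustrative thresholds: allow the dynamic pool to grow to 75
       worker threads and shrink to no fewer than 10 -->
  <thread-count-max>75</thread-count-max>
  <thread-count-min>10</thread-count-min>
  <autostart>true</autostart>
</proxy-scheme>
```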

  • The thread pool is enabled by default and does not require configuration. The default setup allows Coherence to automatically tune the thread count based on the load at any given point in time. Consider explicitly configuring the thread pool only if the automatic tuning proves insufficient.

  • Setting a minimum and maximum thread count of zero forces the proxy service thread to handle all requests; no worker threads are used. Using the proxy service thread to handle client requests is not a best practice.


The coherence.proxy.threads.max and coherence.proxy.threads.min system properties can also be used to specify the dynamic thread pooling thresholds instead of using the cache configuration file. For example:

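A sketch of the corresponding JVM arguments; the values are illustrative:

```
-Dcoherence.proxy.threads.max=75
-Dcoherence.proxy.threads.min=10
```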

6.3.3 Setting an Exact Number of Threads

In most scenarios, dynamic thread pooling is the best way to ensure that a proxy service always has enough threads to handle requests. In controlled applications where client usage is known, an explicit number of threads can be specified by setting the <thread-count-min> and <thread-count-max> elements to the same value. The following example sets 10 threads for use by a proxy service. Additional threads are not created automatically.

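A sketch of such a configuration; the service name is illustrative:

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <!-- Setting min and max to the same value pins the pool
       at exactly 10 worker threads -->
  <thread-count-max>10</thread-count-max>
  <thread-count-min>10</thread-count-min>
  <autostart>true</autostart>
</proxy-scheme>
```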

6.4 Be Careful When Making InvocationService Calls

The InvocationService allows a service member to invoke arbitrary code on any node in the cluster. On Coherence*Extend, however, InvocationService calls are, by default, serviced by the proxy that the client is connected to. When sending the call through a proxy, you cannot choose the particular node on which the code runs.

6.5 Be Careful When Placing Collection Classes in the Cache

Collection objects (such as an ArrayList, HashSet, HashMap, and so on) are deserialized as immutable arrays when cached by Coherence*Extend clients. A ClassCastException is thrown if the objects are extracted and cast to their original types.

As an alternative, use a Java interface object (such as a List, Set, Map, and so on) or encapsulate the collection object in another object. Both of these techniques are illustrated in the following example:

Example 6-1 Casting an ArrayList Object

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ExtendExample
    {
    @SuppressWarnings({ "unchecked" })
    public static void main(String[] asArgs)
        {
        System.setProperty("coherence.cacheconfig", "client-config.xml");
        NamedCache cache = CacheFactory.getCache("test");

        // Create a sample collection
        List list = new ArrayList();
        for (int i = 0; i < 5; i++)
            {
            list.add(String.valueOf(i));
            }
        cache.put("list", list);
        List listFromCache = (List) cache.get("list");
        System.out.println("Type of list put in cache: " + list.getClass());
        System.out.println("Type of list in cache: " + listFromCache.getClass());

        Map map = new TreeMap();
        for (Iterator i = list.iterator(); i.hasNext(); )
            {
            Object o = i.next();
            map.put(o, o);
            }
        cache.put("map", map);
        Map mapFromCache = (Map) cache.get("map");
        System.out.println("Type of map put in cache: " + map.getClass());
        System.out.println("Type of map in cache: " + mapFromCache.getClass());
        }
    }
6.6 Configure POF Serializers for Cache Servers

Proxy servers are responsible for deserializing POF data into Java objects. If you run C++ or .NET applications and store data to the cache, then the conversion to Java objects could be viewed as an unnecessary step.

Coherence provides the option of configuring a POF serializer for cache servers, which has the effect of storing data directly in the cache in POF format.

This can have the following impact on your applications:

  • .NET or C++ clients that only perform puts or gets do not require a Java version of the object. Java versions are still required if deserializing on the server side (for entry processors, cache stores, and so on).

  • POF serializers remove the requirement to serialize and deserialize data on the proxy, thus reducing its memory and CPU requirements.

  • Key manipulation within the proxy is discouraged because it could interfere with the object decoration used by the POF serializer, causing the extend client to not recognize the key.

Example 6-2 illustrates a fragment from a cache configuration file, which configures the default POF serializer that is defined in the operational deployment descriptor.

Example 6-2 Configuring a POFSerializer for a Distributed Cache
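A sketch of such a fragment; the scheme name is illustrative, and pof refers to the POF serializer defined by default in the operational deployment descriptor:

```xml
<distributed-scheme>
  <scheme-name>dist-default</scheme-name>
  <!-- Reference the default POF serializer from the operational
       deployment descriptor so POF data is stored directly -->
  <serializer>pof</serializer>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```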


6.7 Configuring Firewalls for Extend Clients

Firewalls are often used between extend clients and cluster proxies. When using firewalls, the recommended best practice is to configure the proxy to use a range of ports and then open that range of ports in the firewall. In addition, the cluster port (7574 by default) must be opened for TCP if the name service is used. Alternatively, a fixed (non-ephemeral, non-range) port can be used. In this legacy configuration, only the specific fixed port needs to be opened in the firewall, and clients need to be configured to connect directly to the proxy's IP and port.
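As a sketch of the legacy fixed-port approach, the proxy's TCP acceptor can be bound to an explicit address and port; the address and port below are illustrative:

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <!-- Illustrative fixed address and port; open only this
             port in the firewall and point clients directly at it -->
        <address>192.168.1.5</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
```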