5 Best Practices for Coherence*Extend

This chapter describes best practices and guidelines for configuring and running Coherence*Extend.

This chapter includes the following sections:

  • Run Proxy Servers with Local Storage Disabled

  • Do Not Run a Near Cache on a Proxy Server

  • Configure Heap NIO Space to be Equal to the Max Heap Size

  • Configure Proxy Service Thread Pooling

  • Be Careful When Making InvocationService Calls

  • Be Careful When Placing Collection Classes in the Cache

  • Configure POF Serializers for Cache Servers

  • Use Node Locking Instead of Thread Locking

5.1 Run Proxy Servers with Local Storage Disabled

Each server in a partitioned cache, including the proxy server, can store a portion of the data. However, a proxy server has the added overhead of managing potentially unpredictable client workloads, which can be expensive in terms of CPU and memory usage. Local storage should be disabled on the proxy server to preserve these resources.

There are several ways in which you can disable storage:

Local storage for a proxy server can be enabled or disabled with the tangosol.coherence.distributed.localstorage Java property. For example:

-Dtangosol.coherence.distributed.localstorage=false

You can also disable storage in the cache configuration file using the <local-storage> element. For details, see Developing Applications with Oracle Coherence.
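
For illustration, the following is a minimal sketch of disabling storage through the <local-storage> element in a cache configuration file. The scheme name (dist-default) and the surrounding elements are illustrative, not prescribed:

<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <!-- Disable storage on this member; can still be overridden with the
        system property named in the attribute -->
   <local-storage system-property="tangosol.coherence.distributed.localstorage">false</local-storage>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>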

To disable storage for all proxy service instances, modify the <local-storage> setting in the tangosol-coherence-override.xml file. Example 5-1 illustrates setting <local-storage> to false.

Example 5-1 Disabling Storage

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <init-params>
               <init-param id="4">
                  <param-name>local-storage</param-name>
                  <param-value system-property="tangosol.coherence.distributed.localstorage">false</param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>

5.2 Do Not Run a Near Cache on a Proxy Server

By definition, a near cache provides local cache access to both recently used and often-used data. If a proxy server is configured with a near cache, it locally caches data accessed by its remote clients. It is unlikely that these clients are consistently accessing the same subset of data, resulting in a low hit ratio on the near cache. Running a near cache on a proxy server therefore results in higher heap usage and more network traffic on the proxy nodes with little to no benefit. For these reasons, it is recommended that a near cache not be used on a proxy server. To ensure that the proxy server is not running a near cache, remove all near schemes from the cache configuration being used for the proxy, as sketched below.
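
As an illustration, a proxy-side cache mapping can reference a distributed scheme directly rather than a near scheme. The cache name pattern and scheme name here are hypothetical:

<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>*</cache-name>
      <!-- Map directly to a distributed scheme on the proxy;
           do not reference a near-scheme here -->
      <scheme-name>dist-default</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>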

5.3 Configure Heap NIO Space to be Equal to the Max Heap Size

NIO memory is used for the TCP connection into the proxy and for POF serialization and deserialization. Older Java installations tended to run out of NIO (direct) memory because it was configured too low. Newer JDKs configure the off-heap NIO space to be equal to the maximum heap space by default. On Sun JVMs, this can also be set manually:

-XX:MaxDirectMemorySize=MAX_HEAP_SIZE
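
For example, a hypothetical proxy server launch that sizes the direct memory to match a 4 GB heap (the heap size and classpath shown are illustrative):

java -Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g
   -Dtangosol.coherence.distributed.localstorage=false
   -cp coherence.jar com.tangosol.net.DefaultCacheServer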

5.4 Configure Proxy Service Thread Pooling

Proxy services use a dynamic thread pool for daemon (worker) threads. The thread pool automatically adds and removes threads based on the number of client requests, total backlog of requests, and the total number of idle threads. The thread pool helps ensure that there are enough threads to meet the demand of extend clients and that resources are not wasted on idle threads. Change the thread pool's default settings to optimize client performance.

This section includes the following topics:

  • Understanding Proxy Service Threading

  • Setting Proxy Service Thread Pooling Thresholds

  • Setting an Exact Number of Threads

5.4.1 Understanding Proxy Service Threading

Each application has different thread requirements based on the number of clients and the number of operations being performed. Performance should be closely monitored to ensure that there are enough threads to service client requests without saturating clients with too many threads. In addition, log messages are emitted when the thread pool is using its maximum number of threads, which may indicate that additional threads are required.

Client applications are classified into two general categories: active applications and passive applications. In active applications, the extend clients send many requests (put, get, and so on) which are handled by the proxy service. The proxy service requires a large number of threads to sufficiently handle these numerous tasks.

In passive applications, the client waits on events (such as map listeners) based on some specified criteria. Events are handled by a distributed cache service. This service uses worker threads to push events to the client. For these tasks, the thread pool configuration for the distributed cache service should include enough worker threads. See Developing Applications with Oracle Coherence for details on configuring a distributed service thread count.
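
As a sketch, the following distributed scheme adds worker threads to the distributed cache service that pushes events to passive clients. The scheme name and thread count are illustrative only:

<distributed-scheme>
   <scheme-name>dist-events</scheme-name>
   <!-- Worker threads used by the distributed service, including for
        delivering map events to clients -->
   <thread-count>10</thread-count>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>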

Note:

Near caches on extend clients use map listeners when performing invalidation strategies of ALL, PRESENT, and AUTO. Write-heavy applications that use near caches generate many map events.
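
For reference, a minimal near scheme sketch on the extend client side shows where the invalidation strategy is declared. The scheme names are hypothetical, and the remote cache scheme is assumed to be defined elsewhere in the client configuration:

<near-scheme>
   <scheme-name>near-example</scheme-name>
   <front-scheme>
      <local-scheme/>
   </front-scheme>
   <back-scheme>
      <remote-cache-scheme>
         <scheme-ref>extend-remote</scheme-ref>
      </remote-cache-scheme>
   </back-scheme>
   <!-- all, present, and auto register map listeners on the cluster -->
   <invalidation-strategy>present</invalidation-strategy>
</near-scheme>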

5.4.2 Setting Proxy Service Thread Pooling Thresholds

The default thread pool behavior starts with a thread count equal to 2 times the number of processor cores and grows to a maximum thread count equal to 8 times the number of cores. A log message is emitted when the configured maximum is reached.

The thread pool is more aggressive about creating threads than removing them. If client requests and the service backlog increase, additional threads are created every 0.5 seconds. If too many threads become idle, threads are removed from the thread pool every 60 seconds.

To set thread pooling thresholds for a proxy service, add the <thread-count-max> and <thread-count-min> elements within the <proxy-scheme> element. See Developing Applications with Oracle Coherence for a detailed reference of these elements. The following example changes the default pool settings.

Note:

  • To enable dynamic thread pool sizing, do not set the <thread-count> element for a proxy service. Setting a thread count value disables dynamic thread pool sizing.

  • Setting a maximum thread count of zero forces the proxy service thread to handle all requests; no worker threads are used. Using the proxy service thread to handle client requests is not a best practice.

<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <thread-count-max>75</thread-count-max>
   <thread-count-min>10</thread-count-min>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>localhost</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>

The tangosol.coherence.proxy.threads.max and tangosol.coherence.proxy.threads.min system properties can also be used to specify the dynamic thread pooling thresholds instead of using the cache configuration file. For example:

-Dtangosol.coherence.proxy.threads.max=75
-Dtangosol.coherence.proxy.threads.min=10

5.4.3 Setting an Exact Number of Threads

In most scenarios, dynamic thread pooling is the best way to ensure that a proxy service always has enough threads to handle requests. In controlled applications where client usage is known, dynamic thread pooling can be disabled by specifying an explicit number of threads using the <thread-count> element. The following example sets 10 threads for use by a proxy service. Additional threads are not created automatically.

<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <thread-count>10</thread-count>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>localhost</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>

5.5 Be Careful When Making InvocationService Calls

InvocationService allows a member of a service to invoke arbitrary code on any node in the cluster. With Coherence*Extend, however, InvocationService calls are serviced by the proxy to which the client is connected. You cannot choose the particular node on which the code runs when sending the call through a proxy.
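
To make the behavior concrete, the following sketch shows an extend client issuing an invocation that runs on its proxy. The service name ExtendTcpInvocationService and the class names are hypothetical; the remote invocation scheme is assumed to be defined in the client cache configuration, and the task class must also be available on the proxy's classpath:

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;

import java.io.Serializable;
import java.util.Map;

public class InvocationExample
    {
    // Reports the member on which the task runs; for an extend client,
    // this is always the proxy to which the client is connected
    public static class WhereAmI extends AbstractInvocable implements Serializable
        {
        public void run()
            {
            setResult(CacheFactory.getCluster().getLocalMember().toString());
            }
        }

    public static void main(String[] asArgs)
        {
        // The service name must match a remote invocation scheme in the
        // client cache configuration (hypothetical name)
        InvocationService service = (InvocationService)
            CacheFactory.getConfigurableCacheFactory()
                .ensureService("ExtendTcpInvocationService");

        // The member set argument cannot be used to target a node from an
        // extend client; the task executes on the connected proxy
        Map map = service.query(new WhereAmI(), null);
        System.out.println("Ran on: " + map.values());
        }
    }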

5.6 Be Careful When Placing Collection Classes in the Cache

If a Coherence*Extend client puts a collection object (such as an ArrayList, HashSet, HashMap, and so on) directly into the cache, it is deserialized as an immutable array. If you then extract it and cast it to its original type, a ClassCastException is thrown. As an alternative, use a Java interface object (such as a List, Set, Map, and so on) or encapsulate the collection object in another object. Both of these techniques are illustrated in the following example:

Example 5-2 Casting an ArrayList Object

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ExtendExample 
    {
    @SuppressWarnings({ "unchecked" })
    public static void main(String asArgs[])
        {
        System.setProperty("tangosol.coherence.cacheconfig", "client-config.xml");
        NamedCache cache = CacheFactory.getCache("test");
        
        // Create a sample collection
        List list  = new ArrayList();
        for (int i = 0; i < 5; i++)
            {
            list.add(String.valueOf(i));
            }
        cache.put("list", list);
        
        List listFromCache = (List) cache.get("list");
        
        System.out.println("Type of list put in cache: " + list.getClass());
        System.out.println("Type of list in cache: " + listFromCache.getClass());

        Map map = new TreeMap();
        for (Iterator i = list.iterator(); i.hasNext();)
            {
            Object o = i.next();
            map.put(o, o);
            }
        cache.put("map", map);
        
        Map mapFromCache = (Map) cache.get("map");
        
        System.out.println("Type of map put in cache: " + map.getClass());
        System.out.println("Type of map in cache: " + mapFromCache.getClass());
        }
    }

5.7 Configure POF Serializers for Cache Servers

Proxy servers are responsible for deserializing POF data into Java objects. If you run C++ or .NET applications and store data in the cache, then the conversion to Java objects is an unnecessary step. Coherence provides the option of configuring a POF serializer for cache servers, which has the effect of storing data directly in the cache in POF format.

This can have the following impact on your applications:

  • .NET or C++ clients that only perform puts or gets do not require a Java version of the object. Java versions are still required if deserializing on the server side (for entry processors, cache stores, and so on).

  • POF serializers remove the requirement to serialize/deserialize on the proxy, thus reducing proxy memory and CPU requirements.

Example 5-3 illustrates a fragment from a cache configuration file, which configures the default POF serializer that is defined in the operational deployment descriptor.

Example 5-3 Configuring a POFSerializer for a Distributed Cache

...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <serializer>pof</serializer>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...
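
For reference, the pof serializer named in Example 5-3 is predefined in the operational deployment descriptor. The definition involved looks approximately like the following sketch; the POF configuration file name shown is illustrative:

<serializers>
   <serializer id="pof">
      <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      <init-params>
         <init-param>
            <param-type>String</param-type>
            <param-value>pof-config.xml</param-value>
         </init-param>
      </init-params>
   </serializer>
</serializers>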

5.8 Use Node Locking Instead of Thread Locking

Note:

The NamedCache lock APIs are deprecated. Use the locking support that is provided by the entry processor API instead (EntryProcessor for Java and C++, IEntryProcessor for .NET).

Coherence*Extend clients can send lock, put, and unlock requests to the cluster. The proxy holds the locks for the client. The requests for locking and unlocking can be issued at the thread level or the node level. With thread-level locking, a particular thread instance belonging to the proxy (Thread 1, for example) issues the lock request. If any other thread (Thread 3, for example) issues an unlock request, it is ignored; a successful unlock request can be issued only by the thread that issued the initial lock request. This can cause application errors, because unlock requests do not succeed unless the original thread that issued the lock is also the one that receives the request to release it.

With node-level locking, if a particular thread instance belonging to the proxy (Thread 1, for example) issues the lock request, then any other thread (Thread 3, for example) can successfully issue an unlock request.
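
Node-level locking corresponds to configuring the distributed service's lease granularity as member. A minimal sketch of this configuration follows; the scheme name is illustrative:

<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <!-- "member" enables node-level locking; the default, "thread",
        enables thread-level locking -->
   <lease-granularity>member</lease-granularity>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>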