5 Production Checklist

There are many production-related issues to consider when moving a Coherence solution from a development or test environment to a production environment. The production checklist provides a comprehensive set of best practices that can be implemented as required to ensure a smooth transition to a production environment. Always test your Coherence solution in the production environment to identify potential resource and performance issues.

Network Performance Test and Multicast Recommendations

Configure and test network communication.

Test TCP Network Performance

Run the message bus test utility to test the actual network speed and determine its capability for pushing large amounts of TCP messages. Any production deployment should be preceded by a successful run of the message bus test. See Running the Message Bus Test Utility. A TCP stack is typically already configured for a network and requires no additional configuration for Coherence. If TCP performance is unsatisfactory, consider changing TCP settings. See TCP Considerations.

Test Datagram Network Performance

Run the datagram test utility to test the actual network speed and determine its capability for pushing datagram messages. Any production deployment should be preceded by a successful run of both tests. See Performing a Network Performance Test. Furthermore, the datagram test utility must be run with an increasing ratio of publishers to consumers, since a network that appears fine with a single publisher and a single consumer may completely fall apart as the number of publishers increases.

Consider the Use of Multicast

The term multicast refers to the ability to send a packet of information from one server and to have that packet delivered in parallel by the network to many servers. Coherence supports both multicast and multicast-free clustering. Multicast can ease cluster configuration; however, its use may not always be possible for several reasons:

  • Some organizations disallow the use of multicast.

  • Multicast cannot operate over certain types of network equipment; for example, many WAN routers disallow or do not support multicast traffic.

  • Multicast is occasionally unavailable for technical reasons; for example, some switches do not support multicast traffic.

Run the multicast test to verify that multicast is working and to determine the correct (the minimum) TTL value for the production environment. Any production deployment should be preceded by a successful run of the multicast test. See Performing a Multicast Connectivity Test.
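For reference, the test can be launched directly from coherence.jar; the following invocation is a sketch only (the TTL value shown is an assumption to adjust for your network, and the full set of options is described in Performing a Multicast Connectivity Test):

java -cp coherence.jar com.tangosol.net.MulticastTest -ttl 4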

Applications that cannot use multicast for deployment must use unicast and the well known addresses feature. See Using Well Known Addresses in Developing Applications with Oracle Coherence.

Configure Network Devices

Network devices may require configuration even if all network performance tests and the multicast test pass without incident and the results are perfect. See Network Tuning.

Changing the Default Cluster Port

The default cluster port is 7574 and for most use cases does not need to be changed. This port number, or any other selected port number, must not be within the operating system ephemeral port range. Ephemeral ports can be randomly assigned to other processes and can result in Coherence not being able to bind to the port during startup. On most operating systems, the ephemeral port range typically starts at 32,768 or higher. Some versions of Linux, such as Red Hat, have a much lower ephemeral port range and additional precautions must be taken to avoid random bind failures.

On Linux the ephemeral port range can be queried as follows:

sysctl net.ipv4.ip_local_port_range

sysctl net.ipv4.ip_local_reserved_ports

The first command shows the range as two space-separated values indicating the start and end of the range. The second command shows exclusions from the range as a comma-separated list of reserved ports or reserved port ranges (for example, 1,2,10-20,40-50).

If the desired port is in the ephemeral range and not reserved, you can modify the reserved set and optionally narrow the ephemeral port range. This can be done as root by editing /etc/sysctl.conf. For example:

net.ipv4.ip_local_port_range = 9000 65000
net.ipv4.ip_local_reserved_ports = 7574

After editing the file you can then trigger a reload of the settings by running:

sysctl -p

Network Recommendations

Test the production network.

Ensure a Consistent IP Protocol

It is suggested that cluster members share the same setting for the java.net.preferIPv4Stack property. In general, this property does not need to be set. If there are multiple clusters running on the same machine and they share a cluster port, then the clusters must also share the same value for this setting. In rare circumstances, such as running multicast over the loopback address, this setting may be required.
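If the property is set at all, set it identically on every member's command line; for example, to pin all members to the IPv4 stack:

-Djava.net.preferIPv4Stack=true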

Test in a Clustered Environment

After the POC or prototype stage is complete, and until load testing begins, it is not out of the ordinary for an application to be developed and tested by engineers in a non-clustered form. Testing primarily in the non-clustered configuration can hide problems with the application architecture and implementation that appear later in staging or even production.

Make sure that the application has been tested in a clustered configuration before moving to production. There are several ways for clustered testing to be a natural part of the development process; for example:

  • Developers can test with a locally clustered configuration (at least two instances running on their own computer). This works well with the TTL=0 setting, since clustering on a single computer works with TTL=0 (see the example after this list).

  • Unit and regression tests can be introduced that run in a test environment that is clustered. This may help automate certain types of clustered testing that an individual developer would not always remember (or have the time) to do.
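As an illustration of the first approach, the following command (a sketch; the loopback address and the use of DefaultCacheServer are illustrative) starts a cache server that clusters only on the local computer. Running the same command in a second terminal forms a two-member cluster:

java -Dcoherence.ttl=0 -Dcoherence.localhost=127.0.0.1 -Dcoherence.wka=127.0.0.1 -cp coherence.jar com.tangosol.net.DefaultCacheServer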

Evaluate the Production Network's Speed for both UDP and TCP

Most production networks are based on 10 Gigabit Ethernet (10GbE), with some still built on Gigabit Ethernet (GbE) and 100Mb Ethernet. For Coherence, GbE and 10GbE are suggested and 10GbE is recommended. Most servers support 10GbE, and switches are economical, highly available, and widely deployed.

It is important to understand the topology of the production network, and what the devices are used to connect all of the servers that run Coherence. For example, if there are ten different switches being used to connect the servers, are they all the same type (make and model) of switch? Are they all the same speed? Do the servers support the network speeds that are available?

In general, all servers should share a reliable, fully switched network. This generally implies sharing a single switch (ideally, two parallel switches and two network cards per server for availability). There are two primary reasons for this. The first is that using multiple switches almost always results in a reduction in effective network capacity. The second is that multi-switch environments are more likely to have network partitioning events where a partial network failure results in two or more disconnected sets of servers. While partitioning events are rare, Coherence cache servers ideally should share a common switch.

To demonstrate the impact of multiple switches on bandwidth, consider several servers plugged into a single switch. As additional servers are added, each server receives dedicated bandwidth from the switch backplane. For example, on a fully switched gigabit backplane, each server receives a gigabit of inbound bandwidth and a gigabit of outbound bandwidth for a total of 2Gbps full duplex bandwidth. Four servers would have an aggregate of 8Gbps bandwidth. Eight servers would have an aggregate of 16Gbps. And so on up to the limit of the switch (in practice, usually in the range of 160-192Gbps for a gigabit switch). However, consider the case of two switches connected by a 4Gbps (8Gbps full duplex) link. In this case, as servers are added to each switch, they have full mesh bandwidth up to a limit of four servers on each switch (that is, all four servers on one switch can communicate at full speed with the four servers on the other switch). However, adding additional servers potentially creates a bottleneck on the inter-switch link. For example, if five servers on one switch send data to five servers on the other switch at 1Gbps per server, then the combined 5Gbps is restricted by the 4Gbps link. Note that the actual limit may be much higher depending on the traffic-per-server and also the portion of traffic that must actually move across the link. Also note that other factors such as network protocol overhead and uneven traffic patterns may make the usable limit much lower from an application perspective.

Avoid mixing and matching network speeds: make sure that all servers connect to the network at the same speed and that all of the switches and routers between those servers run at that same speed or faster.

Plan for Sustained Network Outages

The Coherence cluster protocol can detect and handle a wide variety of connectivity failures. The clustered services are able to identify the connectivity issue and force the offending cluster node to leave and re-join the cluster. In this way the cluster ensures a consistent shared state among its members. See Death Detection Recommendations and Deploying to Cisco Switches.

Plan for Firewall Port Configuration

Coherence cluster members that are located outside of a firewall must be able to communicate with cluster members that are located within the firewall. Configure the firewall to allow Coherence communication as required. The following list shows common default ports and additional areas where ports are configured.

Note:

In general, using a firewall within a cluster (even between TCMP clients and TCMP servers) is an anti-pattern as it is very easy to misconfigure and is prone to reliability issues that can be hard to troubleshoot in a production environment. By definition, any member within a cluster should be considered trusted. Untrusted members should not be allowed into the cluster and should connect as Coherence*Extend clients or by using a services layer (HTTP, SOA, and so on).
  • cluster port: The default cluster port is 7574. The cluster port should be open in the firewall for both UDP and TCP traffic.

  • unicast ports: Unicast uses TMB (default) and UDP. Each cluster member listens on one UDP and one TCP port and both ports need to be opened in the firewall. The default unicast ports are automatically assigned from the operating system's available ephemeral port range. For clusters that need to communicate across a firewall, a range of ports can be specified for Coherence to operate within. Using a range rather than a specific port allows multiple cluster members to reside on the same machine and use a common configuration. See Specifying a Cluster Member's Unicast Address in Developing Applications with Oracle Coherence.

  • port 7: The default TCP port of the IpMonitor component that is used for detecting hardware failure of cluster members. Coherence does not bind to this port; it only attempts to connect to it as a means of pinging remote machines. The port must be open for Coherence to perform health monitoring checks.

  • Proxy service ports: The proxy listens by default on the same TCP port as the unicast port. For firewall-based configurations, this can be restricted to a range of ports which can then be opened in the firewall. Using a range of ports allows multiple cluster members to be run on the same machine and share a single common configuration. See Defining a Single Proxy Service Instance in Developing Remote Clients for Oracle Coherence.

  • Coherence REST ports: Any number of TCP ports that are used to allow remote connections from Coherence REST clients. See Deploying Coherence REST in Developing Remote Clients for Oracle Coherence.

Ensure that IP Masquerading (IPMASQ) is Not Enabled

IP masquerading rules block some types of traffic that Coherence requires to form clusters. If you are not able to form clusters, check for the issue using the following command:
# iptables -t nat -v  -L POST_public_allow -n
Chain POST_public_allow (1 references)
pkts bytes target     prot opt in     out     source               destination
164K   11M MASQUERADE  all  --  *      !lo     0.0.0.0/0            0.0.0.0/0
   0     0 MASQUERADE  all  --  *      !lo     0.0.0.0/0            0.0.0.0/0

If you see output similar to the above example, you need to remove those entries. You can remove the entries using this command:

# iptables -t nat -v -D POST_public_allow 1

Run the command once for each entry; in the example above, run it twice. When you are done, run the previous command again to verify that the output is an empty list. After you make this change, restart the cluster. The Coherence cluster can then form correctly.

Cache Size Calculation Recommendations

Calculate the approximate size of a cache. Understanding what size cache is required can help determine how many JVMs, how much physical memory, and how many CPUs and servers are required. Hardware and JVM recommendations are provided later in this chapter.

The recommendations in this section are only guidelines: an accurate view of size can only be validated through specific tests that take into account an application's load and use cases that simulate expected user volumes, transaction profiles, processing operations, and so on.

As a starting point, allocate a total heap that is at least 3x the size of the data set, assuming that you are going to keep 1 backup copy of primary data. To make a more accurate calculation, the size of a cache can be calculated as follows (also assuming 1 backup copy of primary data):

Cache Capacity = Number of entries * 2 * Entry Size

Where:

Entry Size = Serialized form of the key + Serialized form of the Value + 150 bytes

For example, consider a cache that contains 5 million objects, where the serialized key and value are 100 bytes and 2KB, respectively.

Calculate the entry size:

100 bytes + 2048 bytes + 150 bytes = 2298 bytes

Then, calculate the cache capacity:

5000000 * 2 * 2298 bytes = 21,915 MB
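The same arithmetic can be expressed as a minimal Java sketch (illustrative names only; the constants are the example figures above):

public class CacheCapacityExample {
    public static void main(String[] args) {
        long entries             = 5_000_000;
        long serializedKeySize   = 100;   // bytes
        long serializedValueSize = 2048;  // bytes (2KB)
        long entryOverhead       = 150;   // bytes per entry

        long entrySize     = serializedKeySize + serializedValueSize + entryOverhead; // 2298 bytes
        long cacheCapacity = entries * 2 * entrySize; // primary plus one backup copy

        System.out.println("Entry size:     " + entrySize + " bytes");
        System.out.println("Cache capacity: " + (cacheCapacity / (1024 * 1024)) + " MB"); // ~21,915 MB
    }
}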

If indexing is used, the index size must also be taken into account. Un-ordered cache indexes consist of the serialized attribute value and the key. Ordered indexes include additional forward and backward navigation information.

Indexes are stored in memory. Each node requires two additional maps (instances of java.util.HashMap) for an index: one for the reverse index and one for the forward index. The reverse index size is the cardinality of the value (the size of the value domain, that is, the number of distinct values). The forward index size is the size of the key set. The extra memory cost for the HashMap is about 30 bytes. The extra cost for each extracted indexed value is 12 bytes (the object reference size) plus the size of the value itself.

For example, the extra size for a Long value is 20 bytes (12 bytes + 8 bytes) and for a String it is 12 bytes + the string length. There is also an additional reference (12 bytes) cost for indexes with a large cardinality and a small additional cost (about 4 bytes) for sorted indexes. Therefore, calculate an approximate index cost as:

Index size = forward index map + backward index map + reference + value size

For an indexed Long value of large cardinality, the cost is approximately:

30 bytes + 30 bytes + 12 bytes + 8 bytes = 80 bytes

For an indexed String with an average length of 20 chars, the cost is approximately:

30 bytes + 30 bytes + 12 bytes + (20 bytes * 2) = 112 bytes

The index cost is relatively high for small objects, but it's constant and becomes less and less expensive for larger objects.

Sizing a cache is not an exact science. Assumptions on the size and maximum number of objects have to be made. A complete example follows:

  • Estimated average entry size = 1k

  • Estimated maximum number of cache objects = 100k

  • String indexes of 20 chars = 5

Calculate the index size:

5 * 112 bytes * 100k = 56MB

Then, calculate the cache capacity:

100k * 2 * 1k + 56MB = ~312MB

Each JVM stores on-heap data and requires some free space to process that data; with a 1GB heap this is approximately 300MB or more. The JVM process also requires address space outside of the heap, approximately 200MB. Therefore, to store 312MB of data requires the following memory for each node in a 2 node JVM cluster:

312MB (for data) + 300MB (working JVM heap) + 200MB (JVM executable) = 812MB (of physical memory)

Note that this is the minimum heap space that is required. It is prudent to add additional space, to take account of any inaccuracies in your estimates, about 10%, and for growth (if this is anticipated). Also, adjust for M+N redundancy. For example, with a 12 member cluster that needs to be able to tolerate a loss of two servers, the aggregate cache capacity should be based on 10 servers and not 12.

With the addition of JVM memory requirements, the complete formula for calculating memory requirements for a cache can be written as follows:

Cache Memory Requirement = (Size of cache entries * 2 (for primary and backup)) + Size of indexes + JVM working memory (~30% of 1GB JVM)

Hardware Recommendations

Understand the hardware requirements and test the hardware accordingly.

Plan Realistic Load Tests

Development typically occurs on relatively fast workstations. Moreover, test cases are usually non-clustered and tend to represent single-user access (that is, only the developer). In such environments the application may seem extraordinarily responsive.

Before moving to production, ensure that realistic load tests have been routinely run in a cluster configuration with simulated concurrent user load.

Develop on Adequate Hardware Before Production

Coherence is compatible with all common workstation hardware. Most developers use PC or Apple hardware, including notebooks, desktops and workstations.

Developer systems should have a significant amount of RAM to run a modern IDE, debugger, application server, database and at least two cluster instances. Memory utilization varies widely, but to ensure productivity, the suggested minimum memory configuration for developer systems is 2GB.

Select a Server Hardware Platform

Oracle works to support the hardware that the customer has standardized on or otherwise selected for production deployment.

  • Oracle has customers running on virtually all major server hardware platforms. The majority of customers use "commodity x86" servers, with a significant number deploying Oracle SPARC and IBM Power servers.

  • Oracle continually tests Coherence on "commodity x86" servers, both Intel and AMD.

  • Intel, Apple and IBM provide hardware, tuning assistance and testing support to Oracle.

If the server hardware purchase is still in the future, the following are suggested for Coherence:

It is strongly recommended that servers be configured with a minimum of 32GB of RAM. For applications that plan to store massive amounts of data in memory (tens or hundreds of gigabytes, or more), evaluate the cost-effectiveness of 128GB or even 256GB of RAM per server. Also, note that a server with a very large amount of RAM likely must run more Coherence nodes (JVMs) per server to use that much memory, so having a larger number of CPU cores helps. Applications that are data-heavy require a higher ratio of RAM to CPU, while applications that are processing-heavy require a lower ratio.

A minimum of 1000Mbps for networking (for example, Gigabit Ethernet or better) is strongly recommended. NICs should be on a high bandwidth bus such as PCI-X or PCIe, and not on standard PCI.

Plan the Number of Servers

Coherence is primarily a scale-out technology. The natural mode of operation is to span many servers (for example, 2-socket or 4-socket commodity servers). However, Coherence can also effectively scale-up on a small number of large servers by using multiple JVMs per server. Failover and failback are more efficient when more servers are present in the cluster, and the impact of a server failure is lessened. A cluster should contain a minimum of four physical servers to minimize the possibility of data loss during a failure. In most WAN configurations, each data center has independent clusters (usually interconnected by Extend-TCP). This increases the total number of discrete servers (four servers per data center, multiplied by the number of data centers).

Coherence is often deployed on smaller clusters (one, two or three physical servers) but this practice has increased risk if a server failure occurs under heavy load. In addition, Coherence clusters are ideally confined to a single switch (for example, fewer than 96 physical servers). In some use cases, applications that are compute-bound or memory-bound (as opposed to network-bound) may run acceptably on larger clusters. See Evaluate the Production Network's Speed for both UDP and TCP.

Also, given the choice between a few large JVMs and a lot of small JVMs, the latter may be the better option. There are several production environments of Coherence that span hundreds of JVMs. Some care is required to properly prepare for clusters of this size, but smaller clusters of dozens of JVMs are readily achieved.

Decide How Many Servers are Required Based on JVMs Used

The following rules should be followed in determining how many servers are required for reliable high availability configuration and how to configure the number of storage-enabled JVMs.

  • There must be more than two servers. A grid with only two servers stops being machine-safe as soon as the number of JVMs on one server differs from the number of JVMs on the other server; so, even when starting with two servers with an equal number of JVMs, losing one JVM forces the grid out of the machine-safe state. If the number of JVMs becomes unequal, it may be difficult for Coherence to assign partitions in a way that ensures both equal per-member utilization and the placement of primary and backup copies on different machines. As a result, the recommended best practice is to use more than two physical servers.

  • For a server that has the largest number of JVMs in the cluster, that number of JVMs must not exceed the total number of JVMs on all the other servers in the cluster.

  • A server with the smallest number of JVMs should run at least half the number of JVMs as a server with the largest number of JVMs; this rule is particularly important for smaller clusters.

  • The margin of safety improves as the number of JVMs tends toward equality on all computers in the cluster; this is more of a general practice than the preceding rules.

Operating System Recommendations

Select and configure an operating system.

Selecting an Operating System

Oracle tests on and supports the following operating systems:

  • Various Linux distributions

  • Sun Solaris

  • IBM AIX

  • Windows

  • Mac

  • OS/400

  • z/OS

  • HP-UX

  • Various BSD UNIX distributions

For commodity x86 servers, Linux distributions (Linux 2.6 kernel or higher) are recommended. While it is expected that most Linux distributions provide a good environment for running Coherence, the following are recommended by Oracle: Oracle Linux (including Oracle Linux with the Unbreakable Enterprise Kernel), Red Hat Enterprise Linux (version 4 or later), and Suse Linux Enterprise (version 10 or later).

Review and follow the instructions in Platform-Specific Deployment Considerations for the operating system on which Coherence is deployed.

Note:

The development and production operating systems may be different. Make sure to regularly test the target production operating system.

Avoid using virtual memory (paging to disk)

In a Coherence-based application, primary data management responsibilities (for example, Dedicated Cache Servers) are hosted by Java-based processes. Modern Java distributions do not work well with virtual memory. In particular, garbage collection (GC) operations may slow down by several orders of magnitude if memory is paged to disk. A properly tuned JVM can perform full GCs in less than a second. However, this may grow to many minutes if the JVM is partially resident on disk. During garbage collection, the node appears unresponsive for an extended period, and the choice for the rest of the cluster is to either wait for the node (blocking a portion of application activity for a corresponding amount of time) or to consider the unresponsive node as failed and perform failover processing. Neither of these outcomes is a good option, and it is important to avoid excessive pauses due to garbage collection. JVMs should be configured with a set heap size to ensure that the heap does not deplete the available RAM. Also, periodic processes (such as daily backup programs) should be monitored to ensure that memory usage spikes do not cause Coherence JVMs to be paged to disk.

See also: Swapping.

Increase Socket Buffer Sizes

The operating system socket buffers must be large enough to handle the incoming network traffic while your Java application is paused during garbage collection. Most versions of UNIX have a very low default buffer limit, which should be increased to 2MB.

See also: Socket Buffer Sizes.
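On Linux, for example, the maximum socket buffer sizes can be raised to 2MB with sysctl (a sketch; see Socket Buffer Sizes for the platform-specific parameters and recommended values):

sysctl -w net.core.rmem_max=2097152
sysctl -w net.core.wmem_max=2097152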

JVM Recommendations

Select and configure a JVM. During development, developers typically use the latest Oracle HotSpot JVM or a direct derivative such as the Mac OS X JVM. The main issues related to using a different JVM in production are:
  • Command line differences, which may expose problems in shell scripts and batch files;

  • Logging and monitoring differences, which may mean that tools used to analyze logs and monitor live JVMs during development testing may not be available in production;

  • Significant differences in optimal garbage collection configuration and approaches to tuning;

  • Differing behaviors in thread scheduling, garbage collection behavior and performance, and the performance of running code.

Make sure that regular testing has occurred on the JVM that is used in production.

Selecting a JVM

For the minimum supported JVM version, refer to System Requirements in Installing Oracle Coherence.

Often the choice of JVM is also dictated by other software. For example:

  • IBM only supports IBM WebSphere running on IBM JVMs. Most of the time, this is the IBM "Sovereign" or "J9" JVM, but when WebSphere runs on Oracle Solaris/Sparc, IBM builds a JVM using the Oracle JVM source code instead of its own.

  • Oracle WebLogic and Oracle Exalogic include specific JVM versions.

  • Apple Mac OS X, HP-UX, IBM AIX and other operating systems only have one JVM vendor (Apple, HP, and IBM respectively).

  • Certain software libraries and frameworks have minimum Java version requirements because they take advantage of relatively new Java features.

On commodity x86 servers running Linux or Windows, use the Oracle HotSpot JVM. Generally speaking, the recent update versions should be used.

Note:

Test and deploy using the latest supported Oracle HotSpot JVM based on your platform and Coherence version.

Before going to production, a JVM vendor and version should be selected and well tested, and absent any flaws appearing during testing and staging with that JVM, that should be the JVM that is used when going to production. For applications requiring continuous availability, a long-duration application load test (for example, at least two weeks) should be run with that JVM before signing off on it.

Review and follow the instructions in Platform-Specific Deployment Considerations for the JVM on which Coherence is deployed.

Setting the JVM Options

JVM configuration options vary over versions and between vendors, but the following are generally suggested (a combined example command line follows this list).

  • Using the -server option results in substantially better performance.

  • Using identical heap size values for both -Xms and -Xmx yields substantially better performance on the Oracle HotSpot JVM and provides "fail fast" memory allocation.

  • Using Garbage First Garbage Collector (G1GC) results in better garbage collection performance: -XX:+UseG1GC.

  • Monitor garbage collection, especially when using large heaps: -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintGCApplicationConcurrentTime.

  • JVMs that experience an OutOfMemoryError can be left in an indeterminate state, which can have adverse effects on a cluster. Configure JVMs to exit upon encountering an OutOfMemoryError instead of allowing the JVM to attempt recovery: on Linux, -XX:OnOutOfMemoryError="kill -9 %p"; on Windows, -XX:OnOutOfMemoryError="taskkill /F /PID %p".

  • Capture a heap dump if the JVM experiences an out of memory error: -XX:+HeapDumpOnOutOfMemoryError.
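Taken together, a cache server started with these suggestions might use a command line similar to the following sketch (the heap size and main class are illustrative, and the GC logging flags shown apply to JDK 8-style HotSpot JVMs; adapt them to your JVM version):

java -server -Xms8g -Xmx8g -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:OnOutOfMemoryError="kill -9 %p" -XX:+HeapDumpOnOutOfMemoryError -Dcoherence.mode=prod -cp coherence.jar com.tangosol.net.DefaultCacheServer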

Plan to Test Mixed JVM Environments

Coherence is pure Java software and can run in clusters composed of any combination of JVM vendors and versions, and Oracle tests such configurations.

Note that it is possible for different JVMs to have slightly different serialization formats for Java objects, meaning that it is possible for an incompatibility to exist when objects are serialized by one JVM, passed over the wire, and a different JVM (vendor, version, or both) attempts to deserialize it. Fortunately, the Java serialization format has been very stable for several years, so this type of issue is extremely unlikely. However, it is highly recommended to test mixed configurations for consistent serialization before deploying in a production environment.

Oracle Exalogic Elastic Cloud Recommendations

Configure Coherence accordingly when using Oracle Exalogic Elastic Cloud software. Oracle Exalogic and the Oracle Exalogic Elastic Cloud software provide a foundation for extreme performance, reliability, and scalability. Coherence has been optimized to take advantage of this foundation especially in its use of Oracle Exabus technology. Exabus consists of unique hardware, software, firmware, device drivers, and management tools and is built on Oracle's Quad Data Rate (QDR) InfiniBand technology. Exabus forms the high-speed communication (I/O) fabric that ties all Oracle Exalogic system components together.

Oracle Coherence includes the following optimizations:

  • Transport optimizations

    Oracle Coherence uses the Oracle Exabus messaging API for message transport. The API is optimized on Exalogic to take advantage of InfiniBand. The API is part of the Oracle Exalogic Elastic Cloud software and is only available on Oracle Exalogic systems.

    In particular, Oracle Coherence uses the InfiniBand Message Bus (IMB) provider. IMB uses a native InfiniBand protocol that supports zero message copy, kernel bypass, predictive notifications, and custom off-heap buffers. The result is decreased host processor load, increased message throughput, decreased interrupts, and decreased garbage collection pauses.

    The default Coherence setup on Oracle Exalogic uses IMB for service communication (transferring data) and for cluster communication. Both defaults can be changed and additional protocols are supported. See Changing the Reliable Transport Protocol.

  • Elastic data optimizations

    The Elastic Data feature is used to store backing map and backup data seamlessly across RAM memory and devices such as Solid State Disks (SSD). The feature enables near memory speed while storing and reading data from SSDs. The feature includes dynamic tuning to ensure the most efficient use of SSD memory on Exalogic systems. See Using the Elastic Data Feature to Store Data in Developing Applications with Oracle Coherence.

  • Coherence*Web optimizations

    Coherence*Web naturally benefits on Exalogic systems because of the increased performance of the network between WebLogic Servers and Coherence servers. Enhancements also include less network usage and better performance by enabling optimized session state management when locking is disabled (coherence.session.optimizeModifiedSessions=true). See Coherence*Web Context Parameters in Administering HTTP Session Management with Oracle Coherence*Web.

Consider Using Fewer JVMs with Larger Heaps

The IMB protocol requires more CPU usage (especially at lower loads) to achieve lower latencies. If you are using many JVMs, or JVMs with smaller heaps (under 12GB), then consider consolidating the JVMs as much as possible. Large heap sizes up to 20GB are common, and larger heaps can be used depending on the application and its tolerance to garbage collection. See JVM Tuning.

Changing the Reliable Transport Protocol

On Oracle Exalogic, Coherence automatically selects the best reliable transport available for the environment. The default Coherence setup uses the InfiniBand Message Bus (IMB) for service communication (transferring data) and for cluster communication unless SSL is enabled, in which case SDMBS is used. You can use a different transport protocol and check for improved performance. However, you should only consider changing the protocol after following the previous recommendations in this section.

Note:

The only time the default transport protocol may need to be explicitly set is in a Solaris Super Cluster environment. The recommended transport protocol is SDMB or (if supported by the environment) IMB.

The following transport protocols are available on Exalogic:

  • datagram – Specifies the use of UDP.

  • tmb – Specifies the TCP Message Bus (TMB) protocol. TMB provides support for TCP/IP.

  • tmbs – TCP/IP message bus protocol with SSL support. TMBS requires the use of an SSL socket provider. See socket-provider in Developing Applications with Oracle Coherence.

  • sdmb – Specifies the Sockets Direct Protocol Message Bus (SDMB). The Sockets Direct Protocol (SDP) provides support for stream connections over the InfiniBand fabric. SDP allows existing socket-based implementations to transparently use InfiniBand.

    Note:

    When running with JDK11 or higher, specifying sdmb protocol will result in a log message stating SDP classes are unavailable, and the client or cache server will not start.
  • sdmbs – SDP message bus with SSL support. SDMBS requires the use of an SSL socket provider.

    Note:

    When running with JDK11 or higher, specifying sdmbs protocol will result in a log message stating SDP classes are unavailable, and the client or cache server will not start.
  • imb – InfiniBand message bus (IMB). IMB is automatically used on Exalogic systems as long as TCMP has not been configured with SSL.

    Note:

    imb protocol is removed as of release 12.2.1.4.0. If imb protocol is specified, it is mapped to the tmb protocol.

To configure a reliable transport for all cluster (unicast) communication, edit the operational override file and within the <unicast-listener> element add a <reliable-transport> element that is set to a protocol:

Note:

By default, all services use the configured protocol and share a single transport instance. In general, a shared transport instance uses fewer resources than a service-specific transport instance.
<?xml version="1.0"?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <reliable-transport 
            system-property="coherence.transport.reliable">imb
         </reliable-transport>
      </unicast-listener>
   </cluster-config>
</coherence>

The coherence.transport.reliable system property also configures the reliable transport. For example:

-Dcoherence.transport.reliable=imb

To configure reliable transport for a service, edit the cache configuration file and within a scheme definition add a <reliable-transport> element that is set to a protocol. The following example demonstrates setting the reliable transport for a partitioned cache service instance called ExampleService:

Note:

Specifying a reliable transport for a service results in the use of a service-specific transport instance rather than the shared transport instance that is defined by the <unicast-listener> element. A service-specific transport instance can result in higher performance but at the cost of increased resource consumption and should be used sparingly for select, high priority services. In general, a shared transport instance consumes fewer resources than service-specific transport instances.
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">

   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>distributed</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   
   <caching-schemes>
      <distributed-scheme>
         <scheme-name>distributed</scheme-name>
         <service-name>ExampleService</service-name>
         <reliable-transport>imb</reliable-transport>
         <backing-map-scheme>
           <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>

Each service type also has a system property that sets its reliable transport. A system property sets the reliable transport for all instances of a service type (an example follows the list). The system properties are:

  • coherence.distributed.transport.reliable
  • coherence.replicated.transport.reliable
  • coherence.optimistic.transport.reliable
  • coherence.invocation.transport.reliable
  • coherence.proxy.transport.reliable
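For example, to set the reliable transport for all partitioned cache (distributed) service instances:

-Dcoherence.distributed.transport.reliable=imb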

Security Recommendations

Ensure security has been configured properly.

Ensure Security Privileges

The minimum set of privileges required for Coherence to function are specified in the security.policy file which is included as part of the Coherence installation. This file can be found in coherence/lib/security/security.policy. If using the Java Security Manager, these privileges must be granted in order for Coherence to function properly.
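For example, when running under the Java Security Manager, the bundled policy file can be referenced on the JVM command line (a sketch; adjust the path to your installation and append any application-specific grants):

java -Djava.security.manager -Djava.security.policy=coherence/lib/security/security.policy -cp coherence.jar com.tangosol.net.DefaultCacheServer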

Plan for SSL Requirements

Coherence-based applications may choose to implement varying levels of security as required, including SSL-based security between cluster members and between Coherence*Extend clients and the cluster. If SSL is a requirement, ensure that all servers have a digital certificate that has been verified and signed by a trusted certificate authority and that the digital certificate is imported into the servers' key store and trust store as required. Coherence*Extend clients must include a trust key store that contains the certificate authority's digital certificate that was used to sign the proxy's digital certificate. See Using SSL to Secure Communication in Securing Oracle Coherence.

Persistence Recommendations

Follow persistence best practices.

Plan for SAN/NFS Persistence Storage

Persisting caches to a remote or shared disk may require additional planning and configuration of the underlying persistence store. The persistence store currently used by Coherence is Oracle Berkeley DB (BDB) Java Edition (JE). General recommendations are provided in the FAQ entry: Can Berkeley DB Java Edition use a NFS, SAN, or other remote/shared/network filesystem for an environment?.

Note:

The current persistence store implementation may change in the future.

The documentation lists a number of issues and recommendations related to the use of remote file systems. You should evaluate those recommendations to help avoid database corruption. In particular, one of the issues relates to having multiple clients (cluster members in the case of Coherence) pointing at the same remote filesystem (BDB JE Environment). The documentation indicates that this should never be done because of issues with faulty remote implementations of flock(). Coherence persistence maintains a distinct BDB JE Environment for each persisted partition and helps address the flock issue by using Coherence clustering to enforce that each of the BDB JE Environments is only ever accessed by a single cluster member at a given time, that is, by the partition owner. The caveat is that if a cluster encounters a split brain condition, then there are temporarily multiple clusters and multiple logical owners for each partition trying to access the same BDB JE Environment. When using a remote file system, the best practice is to either configure Coherence to prevent split brain by using a cluster quorum, ensure that your remote file system properly supports flock(), or point each cluster storage member at a different remote directory.

Application Instrumentation Recommendations

Some Java-based management and monitoring solutions use instrumentation (for example, bytecode-manipulation and ClassLoader substitution). Oracle has observed issues with such solutions in the past. Use application instrumentation solutions cautiously even though there are no current issues reported with the major vendors.

Coherence Modes and Editions

Verify that Coherence is configured to run in production mode and is using the correct edition settings.

Select the Production Mode

Coherence may be configured to operate in either evaluation, development, or production mode. These modes do not limit access to features, but instead alter some default configuration settings. For instance, development mode allows for faster cluster startup to ease the development process.

The development mode is used for all pre-production activities, such as development and testing. This is an important safety feature because development nodes are restricted from joining with production nodes. Development mode is the default mode. Production mode must be explicitly specified when using Coherence in a production environment. To change the mode to production mode, edit the tangosol-coherence.xml (located in coherence.jar) and enter prod as the value for the <license-mode> element. For example:

...
<license-config>
   ...
   <license-mode system-property="coherence.mode">prod</license-mode>
</license-config>
...

The coherence.mode system property is used to specify the license mode instead of using the operational deployment descriptor. For example:

-Dcoherence.mode=prod

In addition to preventing mixed mode clustering, the license-mode also dictates the operational override file to use. When in eval mode the tangosol-coherence-override-eval.xml file is used; when in dev mode the tangosol-coherence-override-dev.xml file is used; whereas, the tangosol-coherence-override-prod.xml file is used when the prod mode is specified. A tangosol-coherence-override.xml file (if it is included in the classpath before the coherence.jar file) is used no matter which mode is selected and overrides any mode-specific override files.

Select the Edition

Note:

The edition switches no longer enforce license restrictions. Do not change the default setting (GE).

All nodes within a cluster must use the same license edition and mode. The default edition is grid edition (GE). Be sure to obtain enough licenses for all the cluster members in the production environment. The server hardware configuration (number or type of processor sockets, processor packages, or CPU cores) may be verified using the ProcessorInfo utility included with Coherence. For example:

java -cp coherence.jar com.tangosol.license.ProcessorInfo

If the result of the ProcessorInfo program differs from the licensed configuration, send the program's output and the actual configuration as a support issue.

Note:

Clusters that run different editions may connect by using Coherence*Extend as a Data Client.

Ensuring that RTC Nodes do Not Use Coherence TCMP

Real-Time Client nodes can connect to clusters using either Coherence TCMP or Coherence*Extend. If the intention is to use extend clients, disable TCMP on the client to ensure that it only connects to a cluster using Coherence*Extend. Otherwise, the client may become a member of the cluster. See Disabling TCMP Communication in Developing Remote Clients for Oracle Coherence.

Coherence Operational Configuration Recommendations

Verify that the operational configuration file is set up correctly.
Operational configuration relates to cluster-level configuration that is defined in the tangosol-coherence.xml file and includes such items as:
  • Cluster and cluster member settings

  • Network settings

  • Management settings

  • Security settings

Coherence Operational aspects are typically configured by using a tangosol-coherence-override.xml file. See Specifying an Operational Configuration File in Developing Applications with Oracle Coherence.

The contents of this file often differs between development and production. It is recommended that these variants be maintained independently due to the significant differences between these environments. The production operational configuration file should be maintained by systems administrators who are far more familiar with the workings of the production systems.

All cluster nodes should use the same operational configuration override file and any node-specific values should be specified by using system properties. See System Property Overrides in Developing Applications with Oracle Coherence. A centralized configuration file may be maintained and accessed by specifying a URL as the value of the coherence.override system property on each cluster node. For example:

-Dcoherence.override=/net/mylocation/tangosol-coherence-override.xml

The override file need only contain the operational elements that are being changed. In addition, always include the id and system-property attributes if they are defined for an element.
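For example, a minimal override file that changes only the cluster name (the value shown is illustrative) and preserves the element's system-property attribute looks like the following:

<?xml version="1.0"?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cluster-config>
      <member-identity>
         <cluster-name system-property="coherence.cluster">ProdCluster</cluster-name>
      </member-identity>
   </cluster-config>
</coherence>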

Coherence Cache Configuration Recommendations

Verify that the cache configuration file is set up correctly.
Cache configuration relates to cache-level configuration and includes such things as:
  • Cache topology (<distributed-scheme>, <near-scheme>, and so on)

  • Cache capacities (<high-units>)

  • Cache redundancy level (<backup-count>)

Coherence cache configuration aspects are typically configured by using a coherence-cache-config.xml file. See Specifying a Cache Configuration File in Developing Applications with Oracle Coherence.

The default coherence-cache-config.xml file included within coherence.jar is intended only as an example and is not suitable for production use. Always use a cache configuration file with definitions that are specific to the application.

All cluster nodes should use the same cache configuration descriptor if possible. A centralized configuration file may be maintained and accessed by specifying a URL as the value of the coherence.cacheconfig system property on each cluster node. For example:

-Dcoherence.cacheconfig=/net/mylocation/coherence-cache-config.xml

Caches can be categorized as either partial or complete. In the former case, the application does not rely on having the entire data set in memory (even if it expects that to be the case). Most caches that use cache loaders or that use a side cache pattern are partial caches. Complete caches require the entire data set to be in cache for the application to work correctly (most commonly because the application is issuing non-primary-key queries against the cache). Caches that are partial should always have a size limit based on the allocated JVM heap size. The limits protect an application from OutOfMemoryError conditions. Set the limits even if the cache is not expected to be fully loaded to protect against changing expectations. See JVM Tuning. Conversely, if a size limit is set for a complete cache, it may cause incorrect results.

It is important to note that when multiple cache schemes are defined for the same cache service name, the first to be loaded dictates the service level parameters. Specifically the <partition-count>, <backup-count>, and <thread-count> subelements of <distributed-scheme> are shared by all caches of the same service. It is recommended that a single service be defined and inherited by the various cache schemes. If you want different values for these items on a cache-by-cache basis, then multiple services may be configured.

For partitioned caches, Coherence evenly distributes the storage responsibilities to all cache servers, regardless of their cache configuration or heap size. For this reason, it is recommended that all cache server processes be configured with the same heap size. For computers with additional resources multiple cache servers may be used to effectively make use of the computer's resources.

To ensure even storage responsibility across a partitioned cache, the <partition-count> subelement of the <distributed-scheme> element should be set to a prime number that is at least the square of the number of expected cache servers.

A clustered service can perform all tasks on the service thread, a caller's thread (if possible), and any number of daemon (worker) threads managed by a dynamic thread pool. The dynamic thread pool is automatically enabled for these services. You can use <thread-count-min> and <thread-count-max> to control the minimum and maximum number of threads in a dynamic thread pool. By default, the value of <thread-count-min> is 1 and <thread-count-max> is Integer.MAX_VALUE. The dynamic thread pool is started with the number of threads specified by <thread-count-min>.

For caches which are backed by a cache store, Oracle recommends configuring the parent service with a thread pool of <thread-count-min> greater than 1 as requests to the cache store may block on I/O. Such thread pools are also recommended for caches that perform CPU-intensive operations on the cache server (queries, aggregations, some entry processors, and so on). For non-CacheStore-based caches, more threads are unlikely to improve performance. Therefore, you may leave the <thread-count-min> at its default value of 1.
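The following scheme fragment is a sketch (names and values are illustrative) that applies these recommendations to a single shared partitioned service: an explicit partition count, one backup copy, a minimum worker thread count for a cache-store-backed cache, and a binary size limit on the backing map:

<distributed-scheme>
   <scheme-name>example-distributed</scheme-name>
   <service-name>DistributedCache</service-name>
   <thread-count-min>4</thread-count-min>
   <partition-count>257</partition-count>
   <backup-count>1</backup-count>
   <backing-map-scheme>
      <local-scheme>
         <!-- approximately 512MB per member with the BINARY unit calculator -->
         <high-units>536870912</high-units>
         <unit-calculator>BINARY</unit-calculator>
      </local-scheme>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>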

Unless explicitly specified, all cluster nodes are storage enabled; that is, they act as cache servers. It is important to control which nodes in your production environment are storage enabled and storage disabled. The coherence.distributed.localstorage system property may be used to control storage, setting it to either true or false. Generally, only dedicated cache servers (including proxy servers) should have storage enabled. All other cluster nodes should be configured as storage disabled. This is especially important for short-lived processes that may join the cluster to perform some work and then exit the cluster. Having these nodes storage enabled introduces unneeded re-partitioning.
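For example, storage can be disabled on application JVMs and other short-lived members by setting the property on their command line:

-Dcoherence.distributed.localstorage=false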

Large Cluster Configuration Recommendations

Configure Coherence accordingly when deploying a large cluster.
  • Distributed caches on large clusters of more than 16 cache servers require more partitions to ensure optimal performance. The default partition count is 257 and should be increased relative to the number of cache servers in the cluster and the amount of data being stored in each partition. See Changing the Number of Partitions in Developing Applications with Oracle Coherence.

  • The maximum packet size on large clusters of more than 400 cluster members must be increased to ensure better performance. The default of 1468 should be increased relative to the size of the cluster, that is, a 600 node cluster would need the maximum packet size increased by 50%. A simple formula is to allow four bytes per node, that is, maximum_packet_size >= maximum_cluster_size * 4B. The maximum packet size is configured as part of the coherence operational configuration file. See Adjusting the Maximum Size of a Packet in Developing Applications with Oracle Coherence.

  • Multicast communication, if supported by the network, can be used instead of point-to-point communication for cluster discovery. This is an ease-of-use recommendation and is not a requirement for large clusters. Multicast is enabled in an operational configuration file. See Configuring Multicast Communication in Developing Applications with Oracle Coherence.

Death Detection Recommendations

Test scenarios that include node failure and configure Coherence accordingly. The Coherence death detection algorithms are based on sustained loss of connectivity between two or more cluster nodes.

When a node identifies that it has lost connectivity with any other node, it consults with other cluster nodes to determine what action should be taken. In attempting to consult with others, the node may find that it cannot communicate with any other nodes and assumes that it has been disconnected from the cluster. Such a condition could be triggered by physically unplugging a node's network adapter. In such an event, the isolated node restarts its clustered services and attempts to rejoin the cluster.

If connectivity with other cluster nodes remains unavailable, the node may (depending on well known address configuration) form a new isolated cluster, or continue searching for the larger cluster. In either case, the previously isolated cluster nodes rejoin the running cluster when connectivity is restored. As part of rejoining the cluster, the node's former cluster state is discarded, including any cache data it may have held, as the remainder of the cluster has taken on ownership of that data (restoring from backups).

It is obviously not possible for a node to identify the state of other nodes without connectivity. To a single node, a local network adapter failure and a network-wide switch failure look identical and are handled in the same way, as described above. The important difference is that for a switch failure all nodes are attempting to rejoin the cluster, which is the equivalent of a full cluster restart, and all prior state and data is dropped.

Dropping all data is not desirable and, to avoid this as part of a sustained switch failure, you must take additional precautions. Options include:

  • Increase detection intervals: The cluster relies on deterministic process-level death detection using the TcpRing component and hardware death detection using the IpMonitor component. Process-level detection is performed within milliseconds, and network or machine failures are detected within 15 seconds by default. Increasing these values allows the cluster to wait longer for connectivity to return (a minimal override sketch appears after this list). Death detection is enabled by default and is configured within the <tcp-ring-listener> element. See Configuring Death Detection in Developing Applications with Oracle Coherence.

  • Persist data to external storage: By using a Read Write Backing Map, the cluster persists data to external storage, and can retrieve it after a cluster restart. So long as write-behind is disabled (the <write-delay> subelement of <read-write-backing-map-scheme>) no data would be lost if a switch fails. The downside here is that synchronously writing through to external storage increases the latency of cache update operations, and the external storage may become a bottleneck.

  • Decide on a cluster quorum: The cluster quorum policy mandates the minimum number of cluster members that must remain in the cluster when the cluster service is terminating suspect members. During intermittent network outages, a high number of cluster members may be removed from the cluster. Using a cluster quorum, a certain number of members are maintained during the outage and are available when the network recovers. See Using the Cluster Quorum in Developing Applications with Oracle Coherence.

    Note:

    To ensure that Windows does not disable a network adapter when it is disconnected, add the following Windows registry DWORD and set it to 1: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DisableDHCPMediaSense. See How to disable the Media Sensing feature for TCP/IP in Windows. This setting also affects static IPs despite the name.

  • Add network level fault tolerance: Adding a redundant layer to the cluster's network infrastructure allows individual pieces of networking equipment to fail without disrupting connectivity. This is commonly achieved by using at least two network adapters per computer and having each adapter connected to a separate switch. This is not a feature of Coherence but rather of the underlying operating system or network driver. The only change to Coherence is that it should be configured to bind to the virtual rather than the physical network adapter. This form of network redundancy goes by different names depending on the operating system: Linux bonding, Solaris trunking, and Windows teaming.
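As referenced in the first option above, a minimal operational override sketch for lengthening the detection interval might look like the following (the timeout and attempt values are illustrative assumptions; see Configuring Death Detection for the authoritative elements and defaults):

<?xml version="1.0"?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cluster-config>
      <tcp-ring-listener>
         <ip-timeout system-property="coherence.ipmonitor.pingtimeout">10s</ip-timeout>
         <ip-attempts>3</ip-attempts>
      </tcp-ring-listener>
   </cluster-config>
</coherence>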