Production Checklist

Overview

Warning
Deploying Coherence in a production environment is very different from using Coherence in a development environment.

Development environments do not reflect the challenges of a production environment.

Coherence tends to be so simple to use in development that developers do not take the necessary planning steps and precautions when moving an application using Coherence into production. This article identifies those planning steps and precautions.

Deployment recommendations are available for:

  • Network
  • Hardware
  • Operating System
  • JVM
  • Java Security Manager
  • Application Instrumentation
  • Coherence Editions and Modes
  • Coherence Operational Configuration
  • Coherence Cache Configuration
  • Large Cluster Configuration

Network

During development, a Coherence-enabled application on a developer's local machine can accidentally form a cluster with the application running on other developers' machines.

Developers often use and test Coherence locally on their workstations. There are several ways in which they may accomplish this, including: setting the multicast TTL to zero, disconnecting from the network (and using the "loopback" interface), or having each developer use a different multicast address and port from all other developers. If one of these approaches is not used, then multiple developers on the same network will find that Coherence has clustered across different developers' locally running instances of the application; in fact, this happens relatively often and causes confusion when it is not understood by the developers.

Setting the TTL to zero on the command line is very simple: Add the following to the JVM startup parameters:

-Dtangosol.coherence.ttl=0

Starting with Coherence version 3.2, setting the TTL to zero for all developers is also very simple. Edit the tangosol-coherence-override-dev.xml in the coherence.jar file, changing the TTL setting as follows:

<time-to-live system-property="tangosol.coherence.ttl">0</time-to-live>

On some UNIX OSs, including some versions of Linux and Mac OS X, setting the TTL to zero may not be enough to isolate a cluster to a single machine. To be safe, assign a different cluster name for each developer, for example using the developer's email address as the cluster name. If the cluster communication does go across the network to other developer machines, then the different cluster name will cause an error on the node that is attempting to start up.
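
For example, a developer-specific cluster name can be supplied on the command line. This is a sketch: it assumes the tangosol.coherence.cluster system property (which maps to the cluster name in the operational configuration), and the value shown is arbitrary:

-Dtangosol.coherence.cluster=jsmith@example.com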

To ensure that the clusters are completely isolated, select a different multicast IP address and port for each developer. In some organizations, a simple approach is to use the developer's phone extension number as part of the multicast address and as the port number (or some part of it). For information on configuring the multicast IP address and port, see the section on multicast-listener configuration.
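
For example, each developer might be assigned a unique multicast address and port via the command line. This is a sketch: the tangosol.coherence.clusteraddress and tangosol.coherence.clusterport system properties are assumed to map to the multicast-listener address and port, and the values shown are arbitrary:

-Dtangosol.coherence.clusteraddress=224.3.4.55
-Dtangosol.coherence.clusterport=34555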

During development, clustered functionality is often not being tested.

After the POC or prototype stage is complete, and until load testing begins, it is not out of the ordinary for the application to be developed and tested by engineers in a non-clustered form. This is dangerous, as testing primarily in the non-clustered configuration can hide problems with the application architecture and implementation that will show up later in staging – or even production.

Make sure that the application is being tested in a clustered configuration as development proceeds. There are several ways for clustered testing to be a natural part of the development process; for example:

  • Developers can test with a locally clustered configuration (at least two instances running on their own machine). This works well with the TTL=0 setting, since clustering on a single machine still works when the TTL is zero (see the example following this list).
  • Unit and regression tests can be introduced that run in a test environment that is clustered. This may help automate certain types of clustered testing that an individual developer would not always remember (or have the time) to do.
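
For example, two cache server instances can be started on the same machine to form a local two-node cluster. This is a minimal sketch, assuming coherence.jar is on the classpath and the com.tangosol.net.DefaultCacheServer entry point is used; run the same command in two terminal windows:

java -Dtangosol.coherence.ttl=0 -cp coherence.jar com.tangosol.net.DefaultCacheServer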

What is the type and speed of the production network?

Most production networks are based on gigabit Ethernet, with a few still built on slower 100Mb Ethernet or faster ten-gigabit Ethernet. It is important to understand the topology of the production network, and the full set of devices that will connect all of the servers that will be running Coherence. For example, if there are ten different switches being used to connect the servers, are they all the same type (make and model) of switch? Are they all the same speed? Do the servers support the network speeds that are available?

In general, all servers should share a reliable, fully switched network – this generally implies sharing a single switch (ideally, two parallel switches and two network cards per server for availability). There are two primary reasons for this. The first is that using more than one switch almost always results in a reduction in effective network capacity. The second is that multi-switch environments are more likely to have network "partitioning" events where a partial network failure will result in two or more disconnected sets of servers. While partitioning events are rare, Coherence cache servers ideally should share a common switch.

To demonstrate the impact of multiple switches on bandwidth, consider a number of servers plugged into a single switch. As additional servers are added, each server receives dedicated bandwidth from the switch backplane. For example, on a fully switched gigabit backplane, each server receives a gigabit of inbound bandwidth and a gigabit of outbound bandwidth for a total of 2Gbps "full duplex" bandwidth. Four servers would have an aggregate of 8Gbps bandwidth. Eight servers would have an aggregate of 16Gbps. And so on up to the limit of the switch (in practice, usually in the range of 160-192Gbps for a gigabit switch). However, consider the case of two switches connected by a 4Gbps (8Gbps full duplex) link. In this case, as servers are added to each switch, they will have full "mesh" bandwidth up to a limit of four servers on each switch (e.g. all four servers on one switch can communicate at full speed with the four servers on the other switch). However, adding additional servers will potentially create a bottleneck on the inter-switch link. For example, if five servers on one switch send data to five servers on the other switch at 1Gbps per server, then the combined 5Gbps will be restricted by the 4Gbps link. Note that the actual limit may be much higher depending on the traffic per server and also the portion of traffic that actually needs to move across the link. Also note that other factors such as network protocol overhead and uneven traffic patterns may make the usable limit much lower from an application perspective.

Avoid mixing and matching network speeds: Make sure that all servers can and do connect to the network at the same speed, and that all of the switches and routers between those servers are running at that same speed or faster.

Oracle strongly suggests GigE or faster: Gigabit Ethernet is supported by most servers built since 2004, and Gigabit switches are economical, available and widely deployed.

Before deploying an application, you must run the Datagram Test to test the actual network speed and determine its capability for pushing large amounts of data. Furthermore, the Datagram test must be run with an increasing ratio of publishers to consumers, since a network that appears fine with a single publisher and a single consumer may completely fall apart as the number of publishers increases, such as occurs with the default configuration of Cisco 6500 series switches.
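
As a sketch of how a point-to-point run might look (the com.tangosol.net.DatagramTest utility is bundled with Coherence; the host names box1 and box2 and the port are placeholders, and the exact arguments should be checked against the Datagram Test documentation), run the following on box1 and the mirror-image command on box2:

java -cp coherence.jar com.tangosol.net.DatagramTest -local box1:9999 box2:9999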

Will the production deployment use multicast?

The term "multicast" refers to the ability to send a packet of information from one server and to have that packet delivered in parallel by the network to many servers. Coherence supports both multicast and multicast-free clustering. Oracle suggests the use of multicast when possible because it is an efficient option for many servers to communicate. However, there are several common reasons why multicast cannot be used:

  • Some organizations disallow the use of multicast.
  • Multicast cannot operate over certain types of network equipment; for example, many WAN routers disallow or do not support multicast traffic.
  • Multicast is occasionally unavailable for technical reasons; for example, some switches do not support multicast traffic.

First, determine whether the desired deployment configuration is to use multicast.

Before deploying an application that will use multicast, you must run the Multicast Test to verify that multicast is working and to determine the correct (the minimum) TTL value for the production environment.
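
As a sketch, the bundled multicast utility can be run simultaneously on each machine to verify that the machines can see each other's packets (the com.tangosol.net.MulticastTest utility is assumed, and the TTL value shown is an arbitrary starting point to be reduced toward the minimum that still works):

java -cp coherence.jar com.tangosol.net.MulticastTest -ttl 4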

Applications that cannot use multicast for deployment must use the WKA configuration. For more information, see the Network Protocols documentation.
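
As a sketch, a WKA configuration in the operational override lists the addresses of a subset of cluster members explicitly (the element names follow the 3.x operational configuration, and the addresses and ports are placeholders):

  <unicast-listener>
    <well-known-addresses>
      <socket-address id="1">
        <address>192.168.1.101</address>
        <port>8088</port>
      </socket-address>
      <socket-address id="2">
        <address>192.168.1.102</address>
        <port>8088</port>
      </socket-address>
    </well-known-addresses>
  </unicast-listener>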

Are your network devices configured optimally?

If the above datagram and/or multicast tests have failed or returned poor results, it is possible that there are configuration problems with the network devices in use. Even if the tests passed without incident and the results were perfect, it is still possible that there are lurking issues with the configuration of the network devices.

Review the suggestions in the Network Infrastructure Settings section of the Performance Tuning documentation.

How will the cluster handle a sustained network outage?

The Coherence cluster protocol is capable of detecting and handling a wide variety of connectivity failures. The clustered services are able to identify the connectivity issue, and force the offending cluster node to leave and re-join the cluster. In this way the cluster ensures a consistent shared state amongst its members.

See the section on death detection for more details.


Hardware

During development, developers can form unrealistic performance expectations.

Most developers have relatively fast workstations. Combined with test cases that are typically non-clustered and tend to represent single-user access (i.e. only the developer), the application may seem extraordinarily responsive.

Include as a requirement that realistic load tests be built that can be run with simulated concurrent user load.

Test routinely in a clustered configuration with simulated concurrent user load.

During development, developer productivity can be adversely affected by inadequate hardware resources, and certain types of quality can also be affected negatively.

Coherence is compatible with all common workstation hardware. Most developers use PC or Apple hardware, including notebooks, desktops and workstations.

Developer systems should have a significant amount of RAM to run a modern IDE, debugger, application server, database and at least two cluster instances. Memory utilization varies widely, but to ensure productivity, the suggested minimum memory configuration for developer systems is 2GB. Desktop systems and workstations can often be configured with 4GB for minimal additional cost.

Developer systems should have two CPU cores or more. Although this will have the likely side-effect of making developers happier, the actual purpose is to increase the quality of code related to multi-threading, since many bugs related to concurrent execution of multiple threads will only show up on multi-CPU systems (systems that contain multiple processor sockets and/or CPU cores).

What are the supported and suggested server hardware platforms for deploying Coherence on?

The short answer is that Oracle works to support the hardware that the customer has standardized on or otherwise selected for production deployment.

  • Oracle has customers running on virtually all major server hardware platforms. The majority of customers use "commodity x86" servers, with a significant number deploying Sun SPARC (including Niagara) and IBM Power servers.
  • Oracle continually tests Coherence on "commodity x86" servers, both Intel and AMD.
  • Intel, Apple and IBM provide hardware, tuning assistance and testing support to Oracle.
  • Oracle conducts internal Coherence certification on all IBM server platforms at least once a year.
  • Oracle and Azul test Coherence regularly on Azul appliances, including the newly-announced 48-core "Vega 2" chip.

If the server hardware purchase is still in the future, the following are suggested for Coherence (as of December 2006):

The most cost-effective server hardware platform is "commodity x86", either Intel or AMD, with one to two processor sockets and two to four CPU cores per processor socket. If selecting an AMD Opteron system, it is strongly recommended that it be a two processor socket system, since memory capacity is usually halved in a single socket system. Intel "Woodcrest" and "Clovertown" Xeons are strongly recommended over the previous Intel Xeon CPUs due to significantly improved 64-bit support, much lower power consumption, much lower heat emission and far better performance. These new Xeons are currently the fastest commodity x86 CPUs, and can support a large memory capacity per server regardless of the processor socket count by using fully buffered memory called "FB-DIMMs".

It is strongly recommended that servers be configured with a minimum of 4GB of RAM. For applications that plan to store massive amounts of data in memory – tens or hundreds of gigabytes, or more – it is recommended to evaluate the cost-effectiveness of 16GB or even 32GB of RAM per server. As of December, 2006, commodity x86 server RAM is readily available in a density of 2GB per DIMM, with higher densities available from only a few vendors and carrying a large price premium; this means that a server with 8 memory slots will only support 16GB in a cost-effective manner. Also note that a server with a very large amount of RAM will likely need to run more Coherence nodes (JVMs) per server in order to utilize that much memory, so having a larger number of CPU cores will help. Applications that are "data heavy" will require a higher ratio of RAM to CPU, while applications that are "processing heavy" will require a lower ratio. For example, it may be sufficient to have two dual-core Xeon CPUs in a 32GB server running 15 Coherence "Cache Server" nodes performing mostly identity-based operations (cache accesses and updates), but if an application makes frequent use of Coherence features such as indexing, parallel queries, entry processors and parallel aggregation, then it will be more effective to have two quad-core Xeon CPUs in a 16GB server – a 4:1 increase in the CPU:RAM ratio.

A minimum of 1000Mbps for networking (e.g. Gigabit Ethernet or better) is strongly recommended. NICs should be on a high bandwidth bus such as PCI-X or PCIe, and not on standard PCI. In the case of PCI-X, having the NIC on an isolated or otherwise lightly loaded 133MHz bus may significantly improve performance.

How many servers are optimal?

Coherence is primarily a scale-out technology. While Coherence can effectively scale-up on large servers by using multiple JVMs per server, the natural mode of operation is to span a number of small servers (e.g. 2-socket or 4-socket commodity servers). Specifically, failover and failback are more efficient in larger configurations, and the impact of a server failure is lessened. As a rule of thumb, a cluster should contain at least four physical servers. In most WAN configurations, each data center will have independent clusters (usually interconnected by Extend-TCP). This will increase the total number of discrete servers (four servers per data center, multiplied by the number of data centers).

Coherence is quite often deployed on smaller clusters (one, two or three physical servers), but this practice carries increased risk if a server fails under heavy load. As discussed in the network section of this document, Coherence clusters are ideally confined to a single switch (e.g. fewer than 96 physical servers). Applications that are compute-bound or memory-bound (as opposed to network-bound) may run acceptably on larger clusters.

Also note that given the choice between a few large JVMs and a lot of small JVMs, the latter may be the better option. There are several production environments of Coherence that span hundreds of JVMs. Some care is required to properly prepare for clusters of this size, but smaller clusters of dozens of JVMs are readily achieved. Please note that disabling UDP multicast (via WKA) or running on slower networks (e.g. 100Mbps ethernet) will reduce network efficiency and make scaling more difficult.

Does it matter how JVMs are distributed among servers?

The following rules should be followed in determining how many servers are required for reliable high availability configuration and how to configure the number of storage-enabled JVMs.

  1. There must be more than two servers. A grid with only two servers stops being machine-safe as soon as the number of JVMs on one server differs from the number of JVMs on the other; so even if the grid starts with two servers running an equal number of JVMs, losing a single JVM forces the grid out of the machine-safe state. Four or more servers present the most stable topology, but deploying on just three servers will work if the other rules are adhered to.
  2. For a server that has the largest number of JVMs in the cluster, that number of JVMs must not exceed the total number of JVMs on all the other servers in the cluster.
  3. A server with the smallest number of JVMs should run at least half as many JVMs as the server with the largest number of JVMs; this rule is particularly important for smaller clusters.
  4. The margin of safety improves as the number of JVMs tends toward equality on all machines in the cluster; this is more of a "rule of thumb" than the preceding "hard" rules.
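
For example, a three-server cluster running 4, 4 and 4 storage-enabled JVMs satisfies all of the rules; 4, 4 and 2 is still acceptable (the largest count, 4, does not exceed 4 + 2, and the smallest count, 2, is at least half of 4); but 6, 2 and 2 violates the second rule, since 6 exceeds the 4 JVMs running on the other two servers combined.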


Operating System

During development, developers typically use a different operating system than the one that the application will be deployed to.

The top three operating systems for application development using Coherence are, in this order: Windows 2000/XP (~85%), Mac OS X (~10%) and Linux (~5%). The top four operating systems for production deployment are, in this order: Linux, Solaris, AIX and Windows. Thus, it is relatively unlikely that the development and deployment operating system will be the same.

Make sure that regular testing is occurring on the target operating system.

What are the supported and suggested server operating systems for deploying Coherence on?

Oracle tests on and supports various Linux distributions (including customers that have custom Linux builds), Sun Solaris, IBM AIX, Windows Vista/2003/2000/XP, Apple Mac OS X, OS/400 and z/OS. Additionally, Oracle supports customers running HP-UX and various BSD UNIX distributions.

If the server operating system decision is still in the future, the following are suggested for Coherence (as of December 2006):

For commodity x86 servers, Linux distributions based on the Linux 2.6 kernel are recommended. While it is expected that most 2.6-based Linux distributions will provide a good environment for running Coherence, the following are recommended by Oracle: RedHat Enterprise Linux (version 4 or later) and Suse Linux Enterprise (version 10 or later). Oracle also routinely tests using distributions such as RedHat Fedora Core 5 and even Knoppix "Live CD".

Review and follow the instructions in the "Deployment Considerations" document (from the list of links below) for the operating system that Coherence will be deployed on.

Avoid using virtual memory (paging to disk).

In a Coherence-based application, primary data management responsibilities (e.g. Dedicated Cache Servers) are hosted by Java-based processes. Modern Java distributions do not work well with virtual memory. In particular, garbage collection (GC) operations may slow down by several orders of magnitude if memory is paged to disk. With modern commodity hardware and a modern JVM, a Java process with a reasonable heap size (512MB-2GB) will typically perform a full garbage collection in a few seconds if all of the process memory is in RAM. However, this may grow to many minutes if the JVM is partially resident on disk. During garbage collection, the node will appear unresponsive for an extended period of time, and the choice for the rest of the cluster is to either wait for the node (blocking a portion of application activity for a corresponding amount of time), or to mark the unresponsive node as "failed" and perform failover processing. Neither of these is a good option, and so it is important to avoid excessive pauses due to garbage collection. JVMs should be pinned into physical RAM, or at least configured so that the JVM will not be paged to disk.
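
On Linux, for example, one common approach (a sketch, not a complete tuning recipe) is to make the kernel strongly prefer keeping process memory resident rather than swapping it out, in addition to sizing heaps so that all JVMs plus the operating system fit comfortably in physical RAM:

sysctl -w vm.swappiness=0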

Note that periodic processes (such as daily backup programs) may cause memory usage spikes that could cause Coherence JVMs to be paged to disk.


JVM

During development, developers typically use the latest Sun JVM or a direct derivative such as the Mac OS X JVM.

The main issues related to using a different JVM in production are:

  • Command line differences, which may expose problems in shell scripts and batch files;
  • Logging and monitoring differences, which may mean that tools used to analyze logs and monitor live JVMs during development testing may not be available in production;
  • Significant differences in optimal GC configuration and approaches to GC tuning;
  • Differing behaviors in thread scheduling, garbage collection behavior and performance, and the performance of running code.

Make sure that regular testing is occurring on the JVM that will be used in production.

Which JVM configuration options should be used?

JVM configuration options vary over versions and between vendors, but the following are generally suggested (an example command line follows this list):

  • Using the -server option will result in substantially better performance.
  • Using identical heap size values for both -Xms and -Xmx will yield substantially better performance, as well as "fail fast" memory allocation.
  • For naive tuning, a heap size of 512MB is a good compromise that balances per-JVM overhead and garbage collection performance.
    • Larger heap sizes are allowed and commonly used, but may require tuning to keep garbage collection pauses manageable.
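
Putting these suggestions together, a storage-enabled node might be launched as follows (a minimal sketch; the com.tangosol.net.DefaultCacheServer entry point and the 512MB heap are assumptions, not requirements):

java -server -Xms512m -Xmx512m -cp coherence.jar com.tangosol.net.DefaultCacheServer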

What are the supported and suggested JVMs for deploying Coherence on?

In terms of Oracle Coherence versions:

  • Coherence 3.x versions are supported on the Sun JDK versions 1.4 and 1.5, and JVMs corresponding to those versions of the Sun JDK. Starting with Coherence 3.3 the 1.6 JVMs are also supported.
  • Coherence version 2.x (currently at the 2.5.1 release level) is supported on the Sun JDK versions 1.2, 1.3, 1.4 and 1.5, and JVMs corresponding to those versions of the Sun JDK.

Often the choice of JVM is dictated by other software. For example:

  • IBM only supports IBM WebSphere running on IBM JVMs. Most of the time, this is the IBM "Sovereign" or "J9" JVM, but when WebSphere runs on Sun Solaris/Sparc, IBM builds a JVM using the Sun JVM source code instead of its own.
  • BEA WebLogic typically includes a JVM which is intended to be used with it. On some platforms, this is the BEA WebLogic JRockit JVM.
  • Apple Mac OS X, HP-UX, IBM AIX and other operating systems only have one JVM vendor (Apple, HP and IBM respectively).
  • Certain software libraries and frameworks have minimum Java version requirements because they take advantage of relatively new Java features.

On commodity x86 servers running Linux or Windows, the Sun JVM is recommended. Generally speaking, the recent update versions are recommended. For example:

  • Oracle recommends testing and deploying using the latest supported Sun JVM based on your platform and Coherence version.

At some point before going to production, a JVM vendor and version should be selected and thoroughly tested, and absent any flaws appearing during testing and staging with that JVM, that should be the JVM used in production. For applications requiring continuous availability, a long-duration application load test (e.g. at least two weeks) should be run with that JVM before signing off on it.

Review and follow the instructions in the "Deployment Considerations" document (from the list of links below) for the JVM that Coherence will be deployed on.

Must all nodes run the same JVM vendor and version?

No. Coherence is pure Java software and can run in clusters composed of any combination of JVM vendors and versions, and Oracle tests such configurations.

Note that it is possible for different JVMs to have slightly different serialization formats for Java objects, meaning that it is possible for an incompatibility to exist when objects are serialized by one JVM, passed over the wire, and a different JVM (vendor and/or version) attempts to deserialize them. Fortunately, the Java serialization format has been very stable for a number of years, so this type of issue is extremely unlikely. However, it is highly recommended to test mixed configurations for consistent serialization prior to deploying in a production environment.


Java Security Manager

The minimum set of privileges required for Coherence to function is specified in the security.policy file, which is included as part of the Coherence installation. This file can be found in coherence/lib/security/security.policy. If using the Java Security Manager, these privileges must be granted in order for Coherence to function properly.
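
As a sketch, the security manager and the bundled policy file can be enabled on the command line (the policy file path is the default installation location noted above; the com.tangosol.net.DefaultCacheServer entry point is an assumption for illustration):

java -Djava.security.manager -Djava.security.policy=coherence/lib/security/security.policy -cp coherence.jar com.tangosol.net.DefaultCacheServer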

Application Instrumentation

Be cautious when using instrumented management and monitoring solutions.

Some Java-based management and monitoring solutions use instrumentation (e.g. bytecode-manipulation and ClassLoader substitution). While there are no known open issues with the latest versions of the primary vendors, Oracle has observed issues in the past.

Coherence Editions and Modes

During development, use the development mode.

The Coherence download includes a fully functional Coherence product supporting all editions and modes. The default configuration is for Grid Edition in Development mode.

Coherence may be configured to operate in either development or production mode. These modes do not limit access to features, but instead alter some default configuration settings. For instance, development mode allows for faster cluster startup to ease the development process.

It is recommended to use the development mode for all pre-production activities, such as development and testing. This is an important safety feature, because Coherence automatically prevents these nodes from joining a production cluster. The production mode must be explicitly specified when using Coherence in a production environment.
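
For example, production mode can be specified explicitly on the command line using the tangosol.coherence.mode system property described later in this document:

-Dtangosol.coherence.mode=prod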

Coherence may be configured to support a limited feature set, based on the customer license agreement.

Only the edition and the number of licensed CPUs specified within the customer license agreement can be used in a production environment.

When operating outside of the production environment it is allowable to run any Coherence edition. However, it is recommended that only the edition specified within the customer license agreement be utilized. This will protect the application from unknowingly making use of unlicensed features.

All nodes within a cluster must use the same license edition and mode.

Starting with Oracle Coherence 3.3, customer-specific license keys are no longer part of product deployment.

Be sure to obtain enough licenses for all the cluster members in the production environment. The servers' hardware configuration (number or type of processor sockets, processor packages or CPU cores) may be verified using the ProcessorInfo utility included with Coherence.

java -cp tangosol.jar com.tangosol.license.ProcessorInfo

If the result of the ProcessorInfo program differs from the licensed configuration, send the program's output and the actual configuration to the "support" email address at Oracle.

How are the edition and mode configured?

There is a <license-config> configuration section in tangosol-coherence.xml (located in coherence.jar) for edition and mode related information.

  <license-config>
    <edition-name system-property="tangosol.coherence.edition">GE</edition-name>
    <license-mode system-property="tangosol.coherence.mode">dev</license-mode>
  </license-config>

In addition to preventing mixed mode clustering, the license-mode also dictates the operational override file which will be used. When in "dev" mode the tangosol-coherence-override-dev.xml file will be utilized, whereas the tangosol-coherence-override-prod.xml file will be used when the "prod" mode is specified. As the mode controls which override file is used, the <license-mode> configuration element is only usable in the base tangosol-coherence.xml file and not within the override files.

These elements are defined by the corresponding coherence.dtd in coherence.jar. It is possible to specify this edition on the command line using the command line override feature:

-Dtangosol.coherence.edition=RTC

Valid values are:

Value   Coherence Edition     Compatible Editions
GE      Grid Edition          RTC, DC
EE      Enterprise Edition    DC
SE      Standard Edition      DC
RTC     Real-Time Client      GE
DC      Data Client           GE, EE, SE
  • Note: clusters running different editions may connect via Coherence*Extend as a Data Client.

Ensuring that RTC nodes don't use Coherence TCMP

The RTC nodes can connect to clusters using either Coherence TCMP or Coherence*Extend. If the intention is to connect over Extend, it is advisable to disable TCMP on that node to ensure that it only connects via Extend. TCMP may be disabled using the system property tangosol.coherence.tcmp.enabled.
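
For example (using the system property named above):

-Dtangosol.coherence.tcmp.enabled=false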

Coherence Operational Configuration

Operational configuration relates to the configuration of Coherence at the cluster level, covering such things as cluster member identity, network and communication settings, and logging.

The operational aspects are normally configured via the tangosol-coherence-override.xml file.

The contents of this file will likely differ between development and production. It is recommended that these variants be maintained independently due to the significant differences between these environments. The production operational configuration file should not be the responsibility of the application developers; instead, it should fall under the jurisdiction of the systems administrators, who are far more familiar with the workings of the production systems.

All cluster nodes should utilize the same operational configuration descriptor. A centralized configuration file may be maintained and accessed by specifying the file's location as a URL using the tangosol.coherence.override system property. Any node specific values may be specified via system properties.
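
For example (the URL shown is a placeholder for a location reachable by all nodes):

-Dtangosol.coherence.override=http://config.example.com/tangosol-coherence-override.xml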

The override file should contain only the subset of configuration elements which you wish to customize. This will not only make your configuration more readable, but will allow you to take advantage of updated defaults in future Coherence releases. All override elements should be copied exactly from the original tangosol-coherence.xml, including the id attribute of the element.

Member descriptors may be used to provide detailed identity information that is useful for defining the location and role of the cluster member. Specifying these items will aid in the management of large clusters by making it easier to identify the role of a remote node if issues arise.
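
As a sketch, a member-identity block in the operational override might look like the following (the element names follow the 3.x operational configuration, and all values are placeholders):

  <cluster-config>
    <member-identity>
      <site-name>datacenter-east</site-name>
      <rack-name>rack-07</rack-name>
      <machine-name>prod-cache-01</machine-name>
      <role-name>CacheServer</role-name>
    </member-identity>
  </cluster-config>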

Coherence Cache Configuration

Cache configuration relates to the configuration of Coherence at a per-cache level, covering such things as the cache topology, cache size limits and expiry, and integration with cache stores.

The cache configuration aspects are normally configured via the coherence-cache-config.xml file.

The default coherence-cache-config.xml file included within coherence.jar is intended only as an example and is not suitable for production use. It is suggested that you produce your own cache configuration file with definitions tailored to your application needs.

All cluster nodes should utilize the same cache configuration descriptor. A centralized configuration file may be maintained and accessed by specifying the file's location as a URL using the tangosol.coherence.cacheconfig system property.
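
For example (the URL shown is a placeholder):

-Dtangosol.coherence.cacheconfig=http://config.example.com/coherence-cache-config.xml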

Choose the cache topology which is most appropriate for each cache's usage scenario.

It is important to size limit your caches based on the allocated JVM heap size. Even if you never expect to fully load the cache, having the limits in place will help protect your application from OutOfMemoryErrors if usage later exceeds your expectations.

For a 1GB heap, it is recommended that at most ¾ of the heap be allocated for cache storage. With the default one level of data redundancy, this implies a per-server cache limit of 375MB for primary data and 375MB for backup data. The amount of memory allocated to cache storage should fit within the tenured heap space for the JVM. See Sun's GC tuning guide for details.
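
As a sketch, a partitioned cache can cap the memory used by its backing map on each node (the element names follow the 3.x cache configuration; the scheme and service names are placeholders, the BINARY unit calculator is assumed to express high-units in bytes, and 393216000 bytes corresponds to the 375MB primary-data figure above):

  <distributed-scheme>
    <scheme-name>example-limited</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
      <local-scheme>
        <unit-calculator>BINARY</unit-calculator>
        <high-units>393216000</high-units>
      </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>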

It is important to note that when multiple cache schemes are defined for the same cache service name, the first to be loaded will dictate the service-level parameters. Specifically, the partition-count, backup-count and thread-count are shared by all caches of the same service.

For multiple caches which use the same cache service it is recommended that the service related elements be defined only once, and that they be inherited by the various cache-schemes which will use them.

If you desire different values for these items on a cache-by-cache basis, then multiple services may be configured.

For partitioned caches, Coherence will evenly distribute the storage responsibilities to all cache servers, regardless of their cache configuration or heap size. For this reason, it is recommended that all cache server processes be configured with the same heap size. For machines with additional resources, multiple cache servers may be utilized to effectively make use of the machine's resources.

To ensure even storage responsibility across a partitioned cache, the partition-count should be set to a prime number which is at least the square of the number of cache servers which will be used.
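
For example, a cluster planned for 20 cache servers needs a partition count of at least 20 × 20 = 400, so a prime such as 401 might be configured on the distributed scheme (a sketch; the partition-count element is set within the distributed-scheme):

  <partition-count>401</partition-count>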

For caches which are backed by a cache store, it is recommended that the parent service be configured with a thread pool, as requests to the cache store may block on I/O. The pool is enabled via the thread-count element. For non-CacheStore-based caches, more threads are unlikely to improve performance and the pool should be left disabled.

Unless explicitly specified, all cluster nodes will be storage enabled, that is, they will act as cache servers. It is important to control which nodes in your production environment will be storage enabled and which will be storage disabled. The tangosol.coherence.distributed.localstorage system property may be used to control this, setting it to either true or false. Generally, only dedicated cache servers should be storage enabled; all other cluster nodes should be configured as storage disabled. This is especially important for short-lived processes which may join the cluster, perform some work, and then exit the cluster; leaving such nodes storage enabled would introduce unneeded repartitioning.
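
For example, a storage-disabled client node could be started with the system property named above:

-Dtangosol.coherence.distributed.localstorage=false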

Large Cluster Configuration

Are there special considerations for large clusters?

  • The general recommendation for the partition count is to be a prime number close to the square of the number of storage-enabled nodes. While this is a good suggestion for small to medium sized clusters, for large clusters it can add too much overhead. For clusters exceeding 128 storage-enabled JVMs, the partition count should be fixed at roughly 16,381.
  • Coherence clusters which consist of over 400 TCMP nodes need to increase the default maximum packet size Coherence will utilize. The default of 1468 should be increased relative to the size of the cluster, i.e. a 600-node cluster would need the maximum packet size increased by 50%. The maximum packet size is configured as part of the Coherence operational configuration file; see packet-size for details on changing this setting (a configuration sketch follows this list).
  • For large clusters which have hundreds of JVMs, it is also recommended that multicast be enabled, as it will allow for more efficient cluster-wide transmissions. These cluster-wide transmissions are rare, but when they do occur, multicast can provide noticeable benefits in large clusters.
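
A configuration sketch for increasing the maximum packet size in the operational override (the packet-publisher, packet-size and maximum-length element names are assumed from the 3.x operational configuration and should be verified against the packet-size documentation; 2202 reflects the 50% increase over the 1468 default mentioned above):

  <packet-publisher>
    <packet-size>
      <maximum-length>2202</maximum-length>
    </packet-size>
  </packet-publisher>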


