9 Tuning TCMP Behavior

This chapter provides instructions for changing the default TCMP settings and includes a brief overview of TCMP. See "Understanding TCMP" for additional details on TCMP. Also, see Oracle Coherence Administrator's Guide, which includes many tuning recommendations and instructions.

The following sections are included in this chapter:

  • Overview of TCMP Data Transmission
  • Throttling Data Transmission
  • Bundling Packets to Reduce Load
  • Changing Packet Retransmission Behavior
  • Configuring the Transmission Packet Pool Size
  • Configuring the Size of the Packet Buffers
  • Adjusting the Maximum Size of a Packet
  • Changing the Packet Speaker Volume Threshold
  • Changing Message Handler Behavior
  • Changing the TCMP Socket Provider Implementation

9.1 Overview of TCMP Data Transmission

Cluster members communicate using Tangosol Cluster Management Protocol (TCMP). TCMP is an IP-based protocol that is used to discover cluster members, manage the cluster, provision services, and transmit data. TCMP is an asynchronous protocol; communication is never blocking even when many threads on a server are communicating at the same time. Asynchronous communication also means that the latency of the network (for example, on a routed network between two different sites) does not affect cluster throughput, although it affects the speed of certain operations.

The TCMP protocol is very tunable to take advantage of specific network topologies, or to add tolerance for low-bandwidth and high-latency segments in a geographically distributed cluster. Coherence comes with a pre-set configuration. Some TCMP attributes are dynamically self-configuring at run time, but can also be overridden and locked down for deployment purposes. TCMP behavior should always be changed based on performance testing. Coherence includes a datagram test that is used to evaluate TCMP data transmission performance over the network. See Oracle Coherence Administrator's Guide for instructions on using the datagram test utility to test network performance.

TCMP data transmission behavior is configured within the tangosol-coherence-override.xml file using the <packet-publisher>, <packet-speaker>, <incoming-message-handler>, and <outgoing-message-handler> elements. See Appendix A, "Operational Configuration Elements," for a reference of all TCMP-related elements that are discussed in this chapter.
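
For orientation, the following skeleton is a sketch of where each of these elements sits within an operational override file (subelement bodies omitted); the examples throughout this chapter fill in the relevant subelements:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-speaker/>
      <packet-publisher/>
      <incoming-message-handler/>
      <outgoing-message-handler/>
   </cluster-config>
</coherence>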

9.2 Throttling Data Transmission

The speed at which data is transmitted is controlled using the <flow-control> and <traffic-jam> elements. These elements can help achieve the greatest throughput with the least amount of packet failure. The throttling settings discussed in this section are typically changed when dealing with slow networks, or small packet buffers.

The following topics are included in this section:

  • Adjusting Packet Flow Control Behavior
  • Disabling Packet Flow Control
  • Adjusting Packet Traffic Jam Behavior

9.2.1 Adjusting Packet Flow Control Behavior

Flow control is used to dynamically throttle the rate of packet transmission to a given cluster member based on point-to-point transmission statistics which measure the cluster member's responsiveness. Flow control stops a cluster member from being flooded with packets while it is incapable of responding.

Flow control is configured within the <flow-control> element. There are two settings that are used to adjust flow control behavior:

  • <pause-detection> – This setting controls the maximum number of packets that are resent to an unresponsive cluster member before determining that the member is paused. When a cluster member is marked as paused, packets addressed to it are sent at a lower rate until the member resumes responding. Pauses are typically due to long garbage collection intervals. The value is specified using the <maximum-packets> element and defaults to 16 packets. A value of 0 disables pause detection.

  • <outstanding-packets> – This setting is used to define the number of unconfirmed packets that are sent to a cluster member before packets addressed to that member are deferred. The value may be specified either as an explicit number by using the <maximum-packets> element, or as a range by using both the <maximum-packets> and <minimum-packets> elements. When a range is specified, this setting is dynamically adjusted based on network statistics. The maximum value should always be greater than 256 packets and defaults to 4096 packets. The minimum value should always be greater than 16 packets and defaults to 64 packets.

To adjust flow control behavior, edit the operational override file and add the <pause-detection> and <outstanding-packets> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <pause-detection>
                  <maximum-packets>32</maximum-packets>
               </pause-detection>
               <outstanding-packets>
                  <maximum-packets>2048</maximum-packets>
                  <minimum-packets>128</minimum-packets>
               </outstanding-packets>
            </flow-control>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

9.2.2 Disabling Packet Flow Control

To disable flow control, edit the operational override file and add an <enabled> element, within the <flow-control> element, that is set to false. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <enabled>false</enabled>
            </flow-control>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

9.2.3 Adjusting Packet Traffic Jam Behavior

A packet traffic jam occurs when the number of pending packets that are enqueued by client threads for the packet publisher to transmit on the network grows to a level that the packet publisher considers intolerable. Traffic jam behavior is configured within the <traffic-jam> element. There are two settings that are used to adjust traffic jam behavior:

  • <maximum-packets> – This setting controls the maximum number of pending packets that the packet publisher tolerates before determining that it is clogged and must slow down client requests (requests from local non-system threads). When the configured maximum packets limit is exceeded, client threads are forced to pause until the number of outstanding packets drops below the specified limit. This setting prevents most unexpected out-of-memory conditions by limiting the size of the resend queue. A value of 0 means no limit. The default value is 8192.

  • <pause-milliseconds> – This setting controls the number of milliseconds that the publisher pauses a client thread that is trying to send a message when the publisher is clogged. The publisher does not allow the message to go through until the clog is gone, and repeatedly sleeps the thread for the duration specified by this property. The default value is 10.

Specifying a packet limit which is too low, or a pause which is too long, may result in the publisher transmitting all pending packets and being left without packets to send. A warning is periodically logged if this condition is detected. Ideal values ensure that the publisher is never left without work to do, but at the same time prevent the queue from growing uncontrollably. The pause should be set short (10ms or less) and the packet limit high (that is, greater than 5000).

When the <traffic-jam> element is used with the <flow-control> element, the setting operates in a point-to-point mode, only blocking a send if the recipient has too many packets outstanding. It is recommended that the <traffic-jam> element's <maximum-packets> subelement value be greater than the <maximum-packets> value for the <outstanding-packets> element. When <flow-control> is disabled, the <traffic-jam> setting takes all outstanding packets into account.
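
The following sketch combines the values from the flow-control example earlier in this chapter with the default traffic-jam values, illustrating the recommendation that the <traffic-jam> limit (8192) exceed the <outstanding-packets> maximum (2048):

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <outstanding-packets>
                  <maximum-packets>2048</maximum-packets>
                  <minimum-packets>128</minimum-packets>
               </outstanding-packets>
            </flow-control>
         </packet-delivery>
         <traffic-jam>
            <maximum-packets>8192</maximum-packets>
            <pause-milliseconds>10</pause-milliseconds>
         </traffic-jam>
      </packet-publisher>
   </cluster-config>
</coherence>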

To adjust packet traffic jam behavior, edit the operational override file and add the <maximum-packets> and <pause-milliseconds> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <traffic-jam>
            <maximum-packets>8192</maximum-packets>
            <pause-milliseconds>10</pause-milliseconds>
         </traffic-jam>
      </packet-publisher>
   </cluster-config>
</coherence>

9.3 Bundling Packets to Reduce Load

Multiple small packets can be bundled into a single larger packet to reduce the load on the network switching infrastructure. Packet bundling is configured within the <packet-bundling> element and includes the following settings:

  • <maximum-deferral-time> – This setting specifies the maximum amount of time to defer a packet while waiting for additional packets to bundle. A value of zero results in the algorithm not waiting and only bundling readily available packets. The default value is 1us (one microsecond).

  • <agression-factor> – This setting specifies the aggressiveness of the packet deferral algorithm; the higher the value, the longer the publisher may wait for additional packets to bundle. The default value is 0.

The default packet-bundling settings are minimally aggressive, allowing bundling to occur without adding a measurable delay. The benefit of more aggressive bundling depends on the network infrastructure and the application objects' typical data sizes and access patterns.

To adjust packet bundling behavior, edit the operational override file and add the <maximum-deferral-time> and <agression-factor> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <packet-bundling>
               <maximum-deferral-time>1us</maximum-deferral-time>
                <agression-factor>0</agression-factor>
            </packet-bundling>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

9.4 Changing Packet Retransmission Behavior

TCMP uses notification packets to acknowledge the receipt of packets which require confirmation. A positive acknowledgment (ACK) packet indicates that a packet was received correctly and does not need to be resent. Multiple ACKs for a given sender are batched into a single ACK packet to avoid wasting network bandwidth with many small ACK packets. Packets that have not been acknowledged are retransmitted based on the packet publisher's configured resend interval.

A negative acknowledgment (NACK) packet indicates that the packet was received incorrectly and causes the packet to be retransmitted. Negative acknowledgment is determined by inspecting packet ordering for packet loss. Negative acknowledgment causes a packet to be resent much quicker than relying on the publisher's resend interval. See "Disabling Negative Acknowledgments" to disable negative acknowledgments.

The following topics are included in this section:

  • Changing the Packet Resend Interval
  • Changing the Packet Resend Timeout
  • Configuring Packet Acknowledgment Delays

9.4.1 Changing the Packet Resend Interval

The packet resend interval specifies the minimum amount of time, in milliseconds, that the packet publisher waits for a corresponding ACK packet, before resending a packet. The default resend interval is 200 milliseconds.

To change the packet resend interval, edit the operational override file and add a <resend-milliseconds> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <resend-milliseconds>400</resend-milliseconds>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

9.4.2 Changing the Packet Resend Timeout

The packet resend timeout interval specifies the maximum amount of time, in milliseconds, that a packet continues to be resent if no ACK packet is received. After this timeout expires, a determination is made whether the recipient is considered terminated. This determination takes additional data into account, such as whether other nodes are still able to communicate with the recipient. The default value is 300000 milliseconds. For production environments, the recommended value is the greater of 300000 and two times the maximum expected full GC duration. For example, if the maximum expected full GC duration is 210 seconds, a timeout of 420000 milliseconds (as shown in the example below) satisfies the recommendation.

Note:

The default death detection mechanism is the TCP-ring listener, which detects failed cluster members before the resend timeout interval is ever reached. See "Configuring Death Detection" for more information on death detection.

To change the packet resend timeout interval, edit the operational override file and add a <timeout-milliseconds> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <timeout-milliseconds>420000</timeout-milliseconds>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

9.4.3 Configuring Packet Acknowledgment Delays

The amount of time the packet publisher waits before sending ACK and NACK packets can be changed as required. The ACK and NACK packet delay intervals are configured within the <notification-queueing> element using the following settings:

  • <ack-delay-milliseconds> – This element specifies the maximum number of milliseconds that the packet publisher delays before sending an ACK packet. The ACK packet may be transmitted earlier if multiple batched acknowledgments fill the ACK packet. This value should be set substantially lower than the remote member's packet delivery resend timeout to allow ample time for the ACK to be received and processed before the resend timeout expires. The default value is 16.

  • <nack-delay-milliseconds> – This element specifies the number of milliseconds that the packet publisher delays before sending a NACK packet. The default value is 1.

To change the ACK and NACK delay intervals, edit the operational override file and add the <ack-delay-milliseconds> and <nack-delay-milliseconds> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <notification-queueing>
            <ack-delay-milliseconds>32</ack-delay-milliseconds>
            <nack-delay-milliseconds>1</nack-delay-milliseconds>
         </notification-queueing>
      </packet-publisher>
   </cluster-config>
</coherence>

9.5 Configuring the Transmission Packet Pool Size

The transmission packet pool is a buffer for use in transmitting UDP packets. Unlike the packet buffers, this buffer is managed internally by Coherence rather than by the operating system and is allocated on the JVM's heap.

The packet pool is used as a reusable buffer between Coherence network services and allows for faster socket-layer processing at the expense of increased memory usage. The pool is initially empty and grows on demand up to the specified size limit; therefore, memory is reserved only when it is needed, which allows the pool to conserve memory.

The transmission packet pool size controls the maximum number of packets which can be queued on the packet speaker before the packet publisher must block. The pool is configured within the <packet-publisher> node using the <packet-pool> element. The <size> element is used to specify the maximum size of the pool, in bytes. By default, the size is unspecified, which is equivalent to a value of 0. A zero value indicates that the buffer size is calculated by factoring the preferred MTU size with 2048. If a size is explicitly defined, then the number of packets is calculated as pool size / MTU size. See "Configuring the Incoming Handler's Packet Pool" for instructions on configuring the incoming handler's packet pool size.

To configure the transmission packet pool size, edit the operational override file and add the <packet-pool> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-pool>
            <size>3072</size>
         </packet-pool>
      </packet-publisher>
   </cluster-config>
</coherence>

9.6 Configuring the Size of the Packet Buffers

Packet buffers are operating system buffers used by datagram sockets (also referred to as socket buffers). Packet buffers can be configured to control how many packets the operating system is requested to buffer. Packet buffers are used by unicast and multicast listeners (inbound buffers) and by the packet publisher (outbound buffer).

The following topics are included in this section:

  • Understanding Packet Buffer Sizing
  • Configuring the Outbound Packet Buffer Size
  • Configuring the Inbound Packet Buffer Size

9.6.1 Understanding Packet Buffer Sizing

Packet buffer size can be configured based either on the number of packets or on bytes using the following settings:

  • <maximum-packets> – This setting specifies the number of packets (based on the configured packet size) that the datagram socket is asked to size itself to buffer. See java.net.SocketOptions#SO_SNDBUF and java.net.SocketOptions#SO_RCVBUF properties for additional details. Actual buffer sizes may be smaller if the underlying socket implementation cannot support more than a certain size.

  • <size> – Specifies the requested size of the underlying socket buffer in bytes.

The operating system treats the specified packet buffer size only as a hint and is not required to allocate the specified amount. In the event that less space is allocated than requested, Coherence issues a warning and continues to operate with the constrained buffer, which may degrade performance. See Oracle Coherence Administrator's Guide for details on configuring your operating system to allow larger buffers.

Large inbound buffers can help insulate the Coherence network layer from JVM pauses that are caused by the Java Garbage Collector. While the JVM is paused, Coherence cannot dequeue packets from any inbound socket. If the pause is long enough to cause the packet buffer to overflow, the packet reception is delayed as the originating node must detect the packet loss and retransmit the packet(s).
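
As a hypothetical sizing illustration (the rate and pause values here are assumptions, not recommendations): if a member receives roughly 5,000 packets per second and a full GC pause can last 4 seconds, the unicast inbound buffer should hold at least 20,000 packets to ride out the pause. Using the <packet-buffer> syntax described in the following sections:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <packet-buffer>
            <maximum-packets>20000</maximum-packets>
         </packet-buffer>
      </unicast-listener>
   </cluster-config>
</coherence>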

9.6.2 Configuring the Outbound Packet Buffer Size

The outbound packet buffer is used by the packet publisher when transmitting packets. When making changes to the buffer size, performance should be evaluated both in terms of throughput and latency. A large buffer size may allow for increased throughput, while a smaller buffer size may allow for decreased latency.

To configure the outbound packet buffer size, edit the operational override file and add a <packet-buffer> element within the <packet-publisher> node and specify the packet buffer size using either the <size> element (for bytes) or the <maximum-packets> element (for packets). The default value is 32 packets. The following example demonstrates specifying the packet buffer size based on the number of packets:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-buffer>
            <maximum-packets>64</maximum-packets>
         </packet-buffer>
      </packet-publisher>
   </cluster-config>
</coherence>

9.6.3 Configuring the Inbound Packet Buffer Size

The multicast listener and unicast listener each have their own inbound packet buffer. To configure an inbound packet buffer size, edit the operational override file and add a <packet-buffer> element (within either the <multicast-listener> or the <unicast-listener> node) and specify the packet buffer size using either the <size> element (for bytes) or the <maximum-packets> element (for packets). The default value is 64 packets for the multicast listener and 1428 packets for the unicast listener.

The following example specifies the packet buffer size for the unicast listener and is entered using bytes:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <packet-buffer>
            <size>1500000</size>
         </packet-buffer>
      </unicast-listener>
   </cluster-config>
</coherence>

The following example specifies the packet buffer size for the multicast listener and is entered using packets:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <multicast-listener>
         <packet-buffer>
            <maximum-packets>128</maximum-packets>
         </packet-buffer>
      </multicast-listener>
   </cluster-config>
</coherence>

9.7 Adjusting the Maximum Size of a Packet

The maximum and preferred UDP packet sizes can be adjusted to optimize the efficiency and throughput of cluster communication. All cluster nodes must use identical maximum packet sizes. For optimal network utilization, this value should be 32 bytes less than the network maximum transmission unit (MTU); for example, 1468 bytes on a network with a standard 1500-byte Ethernet MTU.

Note:

When specifying a UDP packet size larger than 1024 bytes on Microsoft Windows, a registry setting must be adjusted to allow for optimal transmission rates. The COHERENCE_HOME/bin/optimize.reg registry file contains the required settings. See Oracle Coherence Administrator's Guide for details on setting the datagram size on Windows.

Packet size is configured within the <packet-size> element and includes the following settings:

  • <maximum-length> – This setting specifies the packet size, in bytes, which all cluster members can safely support. This value must be the same for all members in the cluster.

  • <preferred-length> – This setting specifies the preferred size, in bytes, of the "average" packet; for optimal network utilization, it should be based on the network MTU as described above.

To adjust the packet size, edit the operational override file and add the <maximum-length> and <preferred-length> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-size>
            <maximum-length>49152</maximum-length>
            <preferred-length>1500</preferred-length>
         </packet-size>
      </packet-publisher>
   </cluster-config>
</coherence>

9.8 Changing the Packet Speaker Volume Threshold

The packet speaker is responsible for sending packets on the network when the packet publisher detects that a network send operation is likely to block. This allows the packet publisher to avoid blocking on I/O and continue to prepare outgoing packets. The packet publisher dynamically chooses whether to use the speaker as the packet load changes.

When the packet load is relatively low, it may be more efficient for the speaker's operations to be performed on the publisher's thread. When the packet load is high, using the speaker allows the publisher to continue preparing packets while the speaker transmits them on the network.

The packet speaker is configured using the <volume-threshold> element to specify the minimum number of packets which must be ready to be sent for the speaker daemon to be activated. A value of 0 forces the speaker to always be used, while a very high value causes it to never be used. If the value is unspecified (the default), it is set to match the packet buffer.

To specify the packet speaker volume threshold, edit the operational override file and add the <volume-threshold> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-speaker>
         <volume-threshold>
            <minimum-packets>32</minimum-packets>
         </volume-threshold>
      </packet-speaker>
   </cluster-config>
</coherence>

9.9 Changing Message Handler Behavior

Cluster services transmit and receive data using message handlers. There is a handler for processing incoming data and a handler for processing outgoing data. Both handlers have settings that can be configured as required.

The following topics are included in this section:

  • Configuring the Incoming Message Handler
  • Configuring the Outgoing Message Handler

9.9.1 Configuring the Incoming Message Handler

The incoming message handler assembles UDP packets into logical messages and dispatches them to the appropriate Coherence service for processing. The incoming message handler is configured within the <incoming-message-handler> element.

The following topics are included in this section:

  • Changing the Time Variance
  • Disabling Negative Acknowledgments
  • Configuring the Incoming Handler's Packet Pool

9.9.1.1 Changing the Time Variance

The <maximum-time-variance> element specifies the maximum time variance between sending and receiving broadcast messages when trying to determine the difference between a new cluster member's system time and the cluster time. The smaller the variance, the more certain one can be that the cluster time is closer between multiple systems running in the cluster; however, the process of joining the cluster is extended until an exchange of messages can occur within the specified variance. Normally, a value as small as 20 milliseconds is sufficient, but with heavily loaded clusters and multiple network hops, a larger value may be necessary. The default value is 16.

To change the maximum time variance, edit the operational override file and add the <maximum-time-variance> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <maximum-time-variance>16</maximum-time-variance>
      </incoming-message-handler>
   </cluster-config>
</coherence>

9.9.1.2 Disabling Negative Acknowledgments

Negative acknowledgments can be disabled for the incoming message handler. When disabled, the handler does not notify the packet sender if packets were received incorrectly. In this case, the packet sender waits the specified resend timeout interval before resending the packet. See "Changing Packet Retransmission Behavior" for more information on packet acknowledgments.

To disable negative acknowledgment, edit the operational override file and add a <use-nack-packets> element that is set to false. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <use-nack-packets>false</use-nack-packets>
      </incoming-message-handler>
   </cluster-config>
</coherence>

9.9.1.3 Configuring the Incoming Handler's Packet Pool

The incoming packet pool is a buffer for use in receiving UDP packets. Unlike the packet buffers, this buffer is managed internally by Coherence rather than by the operating system and is allocated on the JVM's heap.

The packet pool is used as a reusable buffer between Coherence network services and allows for faster socket-layer processing at the expense of increased memory usage. The pool is initially empty and grows on demand up to the specified size limit; therefore, memory is reserved only when it is needed, which allows the pool to conserve memory.

The incoming handler's packet pool size controls the number of packets which can be queued before the unicast listener and multicast listener must block. The pool is configured within the <incoming-message-handler> node using the <packet-pool> element. The <size> element is used to specify the maximum size of the pool, in bytes. By default, the size is unspecified, which is equivalent to a value of 0. A zero value indicates that the buffer size is calculated by factoring the preferred MTU size with 2048. If a size is explicitly defined, then the number of packets is calculated as pool size / MTU size. See "Configuring the Transmission Packet Pool Size" for instructions on configuring the packet pool size used to transmit packets.

To configure the incoming handler's packet pool size, edit the operational override file and add the <size> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <packet-pool>
            <size>3072</size>
         </packet-pool>
      </incoming-message-handler>
   </cluster-config>
</coherence>

9.9.2 Configuring the Outgoing Message Handler

The outgoing message handler is used by cluster services to process messages that are to be transmitted. The outgoing message handler uses a specialized message pool whose size can be configured as required. The outgoing message handler can also be configured to use a network filter. See Chapter 10, "Using Network Filters," for detailed information on using network filters. The outgoing message handler is configured within the <outgoing-message-handler> element.

9.9.2.1 Configuring the Outgoing Handler's Message Pool

The outgoing message handler uses a message pool to control how many message buffers are pooled for message transmission. Pooling message buffers relieves pressure on the JVM garbage collector by reusing the memory resources needed for messaging.

The message pool contains any number of segments of a specified size. For example, a pool with 4 segments and a segment size of 10MB can hold, at most, 40MB of space for serialization. The number of segments and the segment size are defined using the <segments> and <segment-size> elements, respectively.

Each pool segment stores message buffers of a specific size. The smallest buffer size is defined by the <min-buffer-size> element. The buffer size for each subsequent segment is then calculated with a bitwise left shift by the <growth-factor> value ('min-buffer-size' << growth-factor). A left shift by n is equivalent to multiplying by 2^n, where n is the growth factor value. For a growth factor of 2, multiply the minimum buffer size by 4; for a growth factor of 3, multiply the minimum buffer size by 8, and so on.

The following example shows the default pool values and results in a message pool that is 64MB in total size, where: the first pool segment contains message buffers of 1KB; the second pool segment contains message buffers of 4KB; the third pool segment contains message buffers of 16KB; and the fourth pool segment contains message buffers of 64KB. Using the same default values but increasing the growth factor to 3 results in buffer sizes of 1KB, 8KB, 64KB, and 512KB.

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <outgoing-message-handler>
         <message-pool>
            <segments>4</segments>
            <segment-size>16MB</segment-size>
            <min-buffer-size>1KB</min-buffer-size>
            <growth-factor>2</growth-factor>
         </message-pool>
      </outgoing-message-handler>
   </cluster-config>
</coherence>
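
For comparison, the following sketch keeps the default segment values but raises the growth factor to 3, producing the 1KB, 8KB, 64KB, and 512KB buffer sizes described above:

<message-pool>
   <segments>4</segments>
   <segment-size>16MB</segment-size>
   <min-buffer-size>1KB</min-buffer-size>
   <growth-factor>3</growth-factor>
</message-pool>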

Space that is claimed for network buffers (in and out) and serialization buffers is periodically reclaimed when the capacity is higher than the actual usage.

9.10 Changing the TCMP Socket Provider Implementation

Coherence provides three underlying socket provider implementations for use by TCMP:

  • system (default) – produces sockets using the underlying operating system's socket implementation
  • tcp – produces TCP-based datagram sockets
  • ssl – produces SSL-protected sockets

Custom socket providers can also be enabled as required. Socket providers for use by TCMP are configured for the unicast listener within the <unicast-listener> element.

The following topics are included in this section:

  • Using the TCP Socket Provider
  • Using the SSL Socket Provider
  • Enabling a Custom Socket Provider

9.10.1 Using the TCP Socket Provider

The TCP socket provider is a socket provider which, whenever possible, produces TCP-based sockets. This socket provider creates DatagramSocket instances which are backed by TCP. When used with the WKA feature (multicast disabled), TCMP functions entirely over TCP without the need for UDP.

Note:

If this socket provider is used without the WKA feature (multicast enabled), TCP is used for all unicast communications, while multicast is used for group-based communications.

The TCP socket provider uses up to two TCP connections between each pair of cluster members. No additional threads are added to manage the TCP traffic, as it is all done using nonblocking NIO-based sockets. Therefore, the existing TCMP threads handle all the connections. The connections are brought up on demand and are automatically reopened as needed if they get disconnected for any reason. Two connections are used because doing so reduces send/receive contention and noticeably improves performance. TCMP is largely unaware that it is using a reliable protocol and as such still manages guaranteed delivery and flow control.

To specify the TCP socket provider, edit the operational override file and add a <socket-provider> element that includes the tcp value. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="tangosol.coherence.socketprovider">tcp
         </socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>

The tangosol.coherence.socketprovider system property can also be used to specify the socket provider instead of using the operational override file. For example:

-Dtangosol.coherence.socketprovider=tcp
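
For instance, the following sketch starts a cache server with the TCP socket provider (assuming coherence.jar is on the classpath; com.tangosol.net.DefaultCacheServer is the standard cache server entry point):

java -Dtangosol.coherence.socketprovider=tcp -cp coherence.jar com.tangosol.net.DefaultCacheServer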

9.10.2 Using the SSL Socket Provider

The SSL socket provider is a socket provider which only produces SSL-protected sockets. This socket provider creates DatagramSocket instances which are backed by SSL/TCP. SSL is not supported for multicast sockets; therefore, the WKA feature (multicast disabled) must be used for TCMP to function with this provider.

The default SSL configuration allows for easy configuration of two-way SSL connections, based on peer trust where every trusted peer resides within a single JKS keystore. More elaborate configuration can be defined with alternate identity and trust managers to allow for Certificate Authority trust validation. See Oracle Coherence Security Guide for detailed instructions on configuring and using SSL with TCMP.
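
Assuming the default SSL configuration is acceptable, the provider can be selected the same way as the TCP provider in the previous section (a sketch following that example's pattern):

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="tangosol.coherence.socketprovider">ssl
         </socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>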

9.10.3 Enabling a Custom Socket Provider

Custom socket providers can be created and enabled for use by TCMP as required. Custom socket providers must implement the com.tangosol.net.SocketProvider interface. See Oracle Coherence Java API Reference for details on this API.

Custom socket providers are enabled within the <socket-provider> element using the <instance> element. The preferred approach is to use the <socket-provider> element to reference a custom socket provider configuration that is defined within the <socket-providers> node.

The following example demonstrates enabling a custom socket provider by referencing a provider named mySocketProvider which is implemented in the MySocketProvider class. The provider is referenced using the name defined in the id attribute.

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider>mySocketProvider</socket-provider>
      </unicast-listener>

      <socket-providers>
         <socket-provider id="mySocketProvider">
            <instance>
               <class-name>package.MySocketProvider</class-name>
            </instance>
         </socket-provider>
      </socket-providers>
   </cluster-config>
</coherence>

As an alternative, the <instance> element supports the use of a <class-factory-name> element to use a factory class that is responsible for creating SocketProvider instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets a custom socket provider instance using the createProvider method on the MySocketProviderFactory class.

<socket-providers>
   <socket-provider id="mySocketProvider">
      <instance>
         <class-factory-name>package.MySocketProviderFactory</class-factory-name>
         <method-name>createProvider</method-name>
      </instance>
   </socket-provider>
</socket-providers>

Any initialization parameters that are required for an implementation can be specified using the <init-params> element. The following example sets the iMaxTimeout parameter to 2000.

<socket-providers>
   <socket-provider id="mySocketProvider">
      <instance>
         <class-name>package.MySocketProvider</class-name>
         <init-params>
            <init-param>
               <param-name>iMaxTimeout</param-name>
               <param-value>2000</param-value>
            </init-param>
         </init-params>
      </instance>
   </socket-provider>
</socket-providers>