Developing Applications with Oracle Coherence
10 Tuning TCMP Behavior

This chapter provides instructions for changing default TCMP settings. A brief overview of TCMP is also provided. See "Understanding TCMP" for additional details on TCMP. Also, see Administering Oracle Coherence which includes many tuning recommendations and instructions.


10.1 Overview of TCMP Data Transmission

Cluster members communicate using Tangosol Cluster Management Protocol (TCMP). TCMP is an IP-based protocol that is used to discover cluster members, manage the cluster, provision services, and transmit data. TCMP is an asynchronous protocol; communication is never blocking even when many threads on a server are communicating at the same time. Asynchronous communication also means that the latency of the network (for example, on a routed network between two different sites) does not affect cluster throughput, although it affects the speed of certain operations.

The TCMP protocol is very tunable to take advantage of specific network topologies, or to add tolerance for low-bandwidth and high-latency segments in a geographically distributed cluster. Coherence comes with a pre-set configuration. Some TCMP attributes are dynamically self-configuring at run time, but can also be overridden and locked down for deployment purposes. TCMP behavior should always be changed based on performance testing. Coherence includes a datagram test that is used to evaluate TCMP data transmission performance over the network. See Administering Oracle Coherence for instructions on using the datagram test utility to test network performance.

TCMP data transmission behavior is configured within the tangosol-coherence-override.xml file using the <packet-publisher>, <packet-speaker>, <incoming-message-handler>, and <outgoing-message-handler> elements. See Appendix A, "Operational Configuration Elements," for a reference of all TCMP-related elements that are discussed in this chapter.

10.2 Throttling Data Transmission

The speed at which data is transmitted is controlled using the <flow-control> and <traffic-jam> elements. These elements can help achieve the greatest throughput with the least amount of packet failure. The throttling settings discussed in this section are typically changed when dealing with slow networks, or small packet buffers.


10.2.1 Adjusting Packet Flow Control Behavior

Flow control is used to dynamically throttle the rate of packet transmission to a given cluster member based on point-to-point transmission statistics which measure the cluster member's responsiveness. Flow control stops a cluster member from being flooded with packets while it is incapable of responding.

Flow control is configured within the <flow-control> element. There are two settings that are used to adjust flow control behavior:

  • <pause-detection> – This setting controls the maximum number of packets that are resent to an unresponsive cluster member before determining that the member is paused. When a cluster member is marked as paused, packets addressed to it are sent at a lower rate until the member resumes responding. Pauses are typically due to long garbage collection intervals. The value is specified using the <maximum-packets> element and defaults to 16 packets. A value of 0 disables pause detection.

  • <outstanding-packets> – This setting is used to define the number of unconfirmed packets that are sent to a cluster member before packets addressed to that member are deferred. The value may be specified as either an explicit number by using the <maximum-packets> element, or as a range by using both the <maximum-packets> and <minimum-packets> elements. When a range is specified, this setting is dynamically adjusted based on network statistics. The maximum value should always be greater than 256 packets and defaults to 4096 packets. The minimum value should always be greater than 16 packets and defaults to 64 packets.

To adjust flow control behavior, edit the operational override file and add the <pause-detection> and <outstanding-packets> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <pause-detection>
                  <maximum-packets>32</maximum-packets>
               </pause-detection>
               <outstanding-packets>
                  <maximum-packets>2048</maximum-packets>
                  <minimum-packets>128</minimum-packets>
               </outstanding-packets>
            </flow-control>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

10.2.2 Disabling Packet Flow Control

To disable flow control, edit the operational override file and add an <enabled> element, within the <flow-control> element, that is set to false. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <enabled>false</enabled>
            </flow-control>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

10.2.3 Adjusting Packet Traffic Jam Behavior

A packet traffic jam occurs when the number of pending packets that are enqueued by client threads for the packet publisher to transmit on the network grows to a level that the packet publisher considers intolerable. Traffic jam behavior is configured within the <traffic-jam> element. There are two settings that are used to adjust traffic jam behavior:

  • <maximum-packets> – This setting controls the maximum number of pending packets that the packet publisher tolerates before determining that it is clogged and must slow down client requests (requests from local non-system threads). When the configured maximum packets limit is exceeded, client threads are forced to pause until the number of outstanding packets drops below the specified limit. This setting prevents most unexpected out-of-memory conditions by limiting the size of the resend queue. A value of 0 means no limit. The default value is 8192.

  • <pause-milliseconds> – This setting controls the number of milliseconds that the publisher pauses a client thread that is trying to send a message when the publisher is clogged. The publisher does not allow the message to go through until the clog is gone, and repeatedly sleeps the thread for the duration specified by this property. The default value is 10.

Specifying a packet limit that is too low, or a pause that is too long, may result in the publisher transmitting all pending packets and being left without packets to send. A warning is periodically logged if this condition is detected. Ideal values ensure that the publisher is never left without work to do, but at the same time prevent the queue from growing uncontrollably. The pause should be set short (10ms or under) and the packet limit should be set high (that is, greater than 5000).

When the <traffic-jam> element is used with the <flow-control> element, the setting operates in a point-to-point mode, only blocking a send if the recipient has too many packets outstanding. It is recommended that the <traffic-jam> element's <maximum-packets> subelement value be greater than the <maximum-packets> value for the <outstanding-packets> element. When <flow-control> is disabled, the <traffic-jam> setting takes all outstanding packets into account.
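The throttling behavior described above amounts to a back-off loop on each client thread. The following sketch is a hypothetical illustration of that idea only (the class, the simulated drain rate, and the helper method are invented for this example and are not Coherence internals): while the pending-packet count exceeds <maximum-packets>, the client thread sleeps in <pause-milliseconds> increments.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the <traffic-jam> back-off: a client thread
// sleeps in <pause-milliseconds> increments while the publisher's
// pending-packet queue exceeds <maximum-packets>.
class TrafficJamSketch {
    static final int MAX_PACKETS = 8192;   // <maximum-packets> default
    static final long PAUSE_MS = 10;       // <pause-milliseconds> default

    // Returns how many pauses the client thread endured before its
    // send was allowed through. The drain of 1000 packets per pause
    // simulates the publisher transmitting in the background.
    static int pausesNeeded(AtomicInteger pending) {
        int pauses = 0;
        while (pending.get() > MAX_PACKETS) {
            try {
                Thread.sleep(PAUSE_MS);    // block the client thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            pending.addAndGet(-1000);      // simulated background drain
            pauses++;
        }
        return pauses;
    }

    public static void main(String[] args) {
        AtomicInteger pending = new AtomicInteger(10192); // 2000 over the limit
        System.out.println("paused " + pausesNeeded(pending) + " times");
    }
}
```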

To adjust the enqueue rate behavior, edit the operational override file and add the <maximum-packets> and <pause-milliseconds> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <traffic-jam>
            <maximum-packets>8192</maximum-packets>
            <pause-milliseconds>10</pause-milliseconds>
         </traffic-jam>
      </packet-publisher>
   </cluster-config>
</coherence>

10.3 Bundling Packets to Reduce Load

Multiple small packets can be bundled into a single larger packet to reduce the load on the network switching infrastructure. Packet bundling is configured within the <packet-bundling> element and includes the following settings:

  • <maximum-deferral-time> – This setting specifies the maximum amount of time to defer a packet while waiting for additional packets to bundle. A value of zero results in the algorithm not waiting, and only bundling the readily accessible packets. A value greater than zero causes some transmission deferral while waiting for additional packets to become available. This value is typically set below 250 microseconds to avoid a detrimental throughput impact. If the units are not specified, nanoseconds are assumed. The default value is 1us (microsecond).

  • <aggression-factor> – This setting specifies the aggressiveness of the packet deferral algorithm. Whereas the <maximum-deferral-time> element defines the upper limit on the deferral time, the <aggression-factor> influences the average deferral time. The higher the aggression value, the longer the publisher may wait for additional packets. The factor may be expressed as a real number, and values between 0.0 and 1.0 often allow for high packet utilization while keeping latency to a minimum. The default value is 0.

The default packet-bundling settings are minimally aggressive, allowing bundling to occur without adding a measurable delay. The benefits of more aggressive bundling depend on the network infrastructure and the application object's typical data sizes and access patterns.

To adjust packet bundling behavior, edit the operational override file and add the <maximum-deferral-time> and <aggression-factor> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <packet-bundling>
               <maximum-deferral-time>1us</maximum-deferral-time>
               <aggression-factor>0</aggression-factor>
            </packet-bundling>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

10.4 Changing Packet Retransmission Behavior

TCMP utilizes notification packets to acknowledge the receipt of packets which require confirmation. A positive acknowledgment (ACK) packet indicates that a packet was received correctly and does not need to be resent. Multiple ACKs for a given sender are batched into a single ACK packet to avoid wasting network bandwidth with many small ACK packets. Packets that have not been acknowledged are retransmitted based on the packet publisher's configured resend interval.

A negative acknowledgment (NACK) packet indicates that the packet was received incorrectly and causes the packet to be retransmitted. Negative acknowledgment is determined by inspecting packet ordering for packet loss. Negative acknowledgment causes a packet to be resent much quicker than relying on the publisher's resend interval. See "Disabling Negative Acknowledgments" to disable negative acknowledgments.
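The gap-based loss detection behind NACKs can be pictured as follows. This is a hypothetical illustration of the idea only, not Coherence's implementation (the class and method names are invented): the receiver tracks the next expected sequence number, and a packet that skips ahead implies the intervening packets were lost and should be negatively acknowledged immediately rather than waiting for the sender's resend interval.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of NACK triggering: a packet arriving with a
// sequence number beyond the next expected one reveals a gap, and the
// missing packets are requested for early retransmission.
class NackSketch {
    private long expected = 1;               // next in-order sequence number

    // Returns the sequence numbers to NACK when packet 'received' arrives.
    List<Long> onPacket(long received) {
        List<Long> missing = new ArrayList<>();
        for (long seq = expected; seq < received; seq++) {
            missing.add(seq);                // gap detected: resend these now
        }
        expected = Math.max(expected, received + 1);
        return missing;
    }
}
```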


10.4.1 Changing the Packet Resend Interval

The packet resend interval specifies the minimum amount of time, in milliseconds, that the packet publisher waits for a corresponding ACK packet, before resending a packet. The default resend interval is 200 milliseconds.

To change the packet resend interval, edit the operational override file and add a <resend-milliseconds> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <resend-milliseconds>400</resend-milliseconds>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

10.4.2 Changing the Packet Resend Timeout

The packet resend timeout interval specifies the maximum amount of time, in milliseconds, that a packet continues to be resent if no ACK packet is received. After this timeout expires, a determination is made if the recipient is to be considered terminated. This determination takes additional data into account, such as if other nodes are still able to communicate with the recipient. The default value is 300000 milliseconds. For production environments, the recommended value is the greater of 300000 and two times the maximum expected full GC duration.

Note:

The default death detection mechanism is the TCP-ring listener, which detects failed cluster members before the resend timeout interval is ever reached. See "Configuring Death Detection" for more information on death detection.

To change the packet resend timeout interval, edit the operational override file and add a <timeout-milliseconds> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <timeout-milliseconds>420000</timeout-milliseconds>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>

10.4.3 Configuring Packet Acknowledgment Delays

The amount of time the packet publisher waits before sending ACK and NACK packets can be changed as required. The ACK and NACK packet delay intervals are configured within the <notification-queueing> element using the following settings:

  • <ack-delay-milliseconds> – This element specifies the maximum number of milliseconds that the packet publisher delays before sending an ACK packet. The ACK packet may be transmitted earlier if multiple batched acknowledgments fill the ACK packet. This value should be set substantially lower than the remote member's packet delivery resend timeout to allow ample time for the ACK to be received and processed before the resend timeout expires. The default value is 16.

  • <nack-delay-milliseconds> – This element specifies the number of milliseconds that the packet publisher delays before sending a NACK packet. The default value is 1.

To change the ACK and NACK delay intervals, edit the operational override file and add the <ack-delay-milliseconds> and <nack-delay-milliseconds> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <notification-queueing>
            <ack-delay-milliseconds>32</ack-delay-milliseconds>
            <nack-delay-milliseconds>1</nack-delay-milliseconds>
         </notification-queueing>
      </packet-publisher>
   </cluster-config>
</coherence>

10.5 Configuring the Size of the Packet Buffers

Packet buffers are operating system buffers used by datagram sockets (also referred to as socket buffers). Packet buffers can be configured to control how many packets the operating system is requested to buffer. Packet buffers are used by unicast and multicast listeners (inbound buffers) and by the packet publisher (outbound buffer).


10.5.1 Understanding Packet Buffer Sizing

Packet buffer size can be configured based on either the number of packets or based on bytes using the following settings:

  • <maximum-packets> – This setting specifies the number of packets (based on the configured packet size) that the datagram socket is asked to size itself to buffer. See java.net.SocketOptions#SO_SNDBUF and java.net.SocketOptions#SO_RCVBUF properties for additional details. Actual buffer sizes may be smaller if the underlying socket implementation cannot support more than a certain size. For details on configuring the packet size, see "Adjusting the Maximum Size of a Packet".

  • <size> – Specifies the requested size of the underlying socket buffer in bytes.

The operating system treats the specified packet buffer size only as a hint and is not required to allocate the specified amount. If less space is allocated than requested, Coherence issues a warning and continues to operate with the constrained buffer, which may degrade performance. See Administering Oracle Coherence for details on configuring your operating system to allow larger buffers.

Large inbound buffers can help insulate the Coherence network layer from JVM pauses that are caused by the Java Garbage Collector. While the JVM is paused, Coherence cannot dequeue packets from any inbound socket. If the pause is long enough to cause the packet buffer to overflow, the packet reception is delayed as the originating node must detect the packet loss and retransmit the packet(s).
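As a back-of-the-envelope check of the sizing argument above, the buffer must hold roughly the inbound packet rate multiplied by the worst-case pause. The sketch below is illustrative only; the class name and the sample rate and pause are invented inputs, not Coherence defaults.

```java
// Back-of-the-envelope sizing sketch: how many packets must the inbound
// buffer hold to survive a full-GC pause without overflow? The inputs
// below are hypothetical, not Coherence defaults.
class BufferSizing {
    // packetsPerSecond: sustained inbound packet rate
    // pauseMillis:      worst-case observed GC pause
    static long requiredBufferPackets(long packetsPerSecond, long pauseMillis) {
        return (packetsPerSecond * pauseMillis + 999) / 1000; // round up
    }

    public static void main(String[] args) {
        // e.g. 10,000 packets/s sustained and a 2-second full GC
        System.out.println(requiredBufferPackets(10_000, 2_000) + " packets");
    }
}
```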

10.5.2 Configuring the Outbound Packet Buffer Size

The outbound packet buffer is used by the packet publisher when transmitting packets. When making changes to the buffer size, performance should be evaluated both in terms of throughput and latency. A large buffer size may allow for increased throughput, while a smaller buffer size may allow for decreased latency.

To configure the outbound packet buffer size, edit the operational override file and add a <packet-buffer> element within the <packet-publisher> node and specify the packet buffer size using either the <size> element (for bytes) or the <maximum-packets> element (for packets). The default value is 32 packets. The following example demonstrates specifying the packet buffer size based on the number of packets:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-buffer>
            <maximum-packets>64</maximum-packets>
         </packet-buffer>
      </packet-publisher>
   </cluster-config>
</coherence>

10.5.3 Configuring the Inbound Packet Buffer Size

The multicast listener and unicast listener each have their own inbound packet buffer. To configure an inbound packet buffer size, edit the operational override file and add a <packet-buffer> element (within either a <multicast-listener> or <unicast-listener> node, respectively) and specify the packet buffer size using either the <size> element (for bytes) or the <maximum-packets> element (for packets). The default value is 64 packets for the multicast listener and 1428 packets for the unicast listener.

The following example specifies the packet buffer size for the unicast listener and is entered using bytes:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <packet-buffer>
            <size>1500000</size>
         </packet-buffer>
      </unicast-listener>
   </cluster-config>
</coherence>

The following example specifies the packet buffer size for the multicast listener and is entered using packets:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <multicast-listener>
         <packet-buffer>
            <maximum-packets>128</maximum-packets>
         </packet-buffer>
      </multicast-listener>
   </cluster-config>
</coherence>

10.6 Adjusting the Maximum Size of a Packet

The maximum and preferred UDP packet sizes can be adjusted to optimize the efficiency and throughput of cluster communication. All cluster nodes must use identical maximum packet sizes. For optimal network utilization, this value should be 32 bytes less than the network maximum transmission unit (MTU).

Note:

When specifying a UDP packet size larger than 1024 bytes on Microsoft Windows, a registry setting must be adjusted to allow for optimal transmission rates. The COHERENCE_HOME/bin/optimize.reg registration file contains the registry settings. See Administering Oracle Coherence for details on setting the datagram size on Windows.

Packet size is configured within the <packet-size> element and includes the following settings:

  • <maximum-length> – Specifies the packet size, in bytes, which all cluster members can safely support. This value must be the same for all members in the cluster. A low value can artificially limit the maximum size of the cluster. This value should be at least 512. The default value is 64KB.

  • <preferred-length> – Specifies the preferred size, in bytes, of the DatagramPacket objects that are sent and received on the unicast and multicast sockets.

    This value can be larger or smaller than the <maximum-length> value, and need not be the same for all cluster members. The ideal value is one which fits within the network MTU, leaving enough space for either the UDP or TCP packet headers, which are 32 and 52 bytes respectively.

    This value should be at least 512. A default value is automatically calculated based on the local node's MTU. An MTU of 1500 is used if the MTU cannot be obtained, and the default is adjusted for the packet headers (1468 for UDP and 1448 for TCP).
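The default calculation described above is simple arithmetic: the MTU less the packet header overhead. The sketch below (class and method names invented for illustration) reproduces the documented 1468/1448 figures for a 1500-byte MTU.

```java
// Sketch of the default <preferred-length> calculation: the network MTU
// less the packet header overhead (32 bytes for UDP, 52 bytes for TCP).
class PreferredLength {
    static int forUdp(int mtu) { return mtu - 32; }
    static int forTcp(int mtu) { return mtu - 52; }

    public static void main(String[] args) {
        System.out.println(forUdp(1500)); // 1468, the documented UDP default
        System.out.println(forTcp(1500)); // 1448, the documented TCP default
    }
}
```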

To adjust the packet size, edit the operational override file and add the <maximum-length> and <preferred-length> elements as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-size>
            <maximum-length>49152</maximum-length>
            <preferred-length>1500</preferred-length>
         </packet-size>
      </packet-publisher>
   </cluster-config>
</coherence>

10.7 Changing the Packet Speaker Volume Threshold

The packet speaker is responsible for sending packets on the network when the packet-publisher detects that a network send operation is likely to block. This allows the packet publisher to avoid blocking on I/O and continue to prepare outgoing packets. The packet publisher dynamically chooses whether to use the speaker as the packet load changes.

When the packet load is relatively low it may be more efficient for the speaker's operations to be performed on the publisher's thread. When the packet load is high using the speaker allows the publisher to continue preparing packets while the speaker transmits them on the network.

The packet speaker is configured using the <volume-threshold> element to specify the minimum number of packets which must be ready to be sent for the speaker daemon to be activated. If the value is unspecified (the default), it is set to match the packet buffer.

To specify the packet speaker volume threshold, edit the operational override file and add the <volume-threshold> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-speaker>
         <enabled>true</enabled>
         <volume-threshold>
            <minimum-packets>32</minimum-packets>
         </volume-threshold>
      </packet-speaker>
   </cluster-config>
</coherence>

10.8 Configuring the Incoming Message Handler

The incoming message handler assembles UDP packets into logical messages and dispatches them to the appropriate Coherence service for processing. The incoming message handler is configured within the <incoming-message-handler> element.


10.8.1 Changing the Time Variance

The <maximum-time-variance> element specifies the maximum time variance between sending and receiving broadcast messages when trying to determine the difference between a new cluster member's system time and the cluster time. The smaller the variance, the more certain one can be that the cluster time is closer between multiple systems running in the cluster; however, the process of joining the cluster is extended until an exchange of messages can occur within the specified variance. Normally, a value as small as 20 milliseconds is sufficient, but with heavily loaded clusters and multiple network hops, a larger value may be necessary. The default value is 16.

To change the maximum time variance, edit the operational override file and add the <maximum-time-variance> element as follows:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <maximum-time-variance>16</maximum-time-variance>
      </incoming-message-handler>
   </cluster-config>
</coherence>

10.8.2 Disabling Negative Acknowledgments

Negative acknowledgments can be disabled for the incoming message handler. When disabled, the handler does not notify the packet sender if packets were received incorrectly. In this case, the packet sender waits the specified resend timeout interval before resending the packet. See "Changing Packet Retransmission Behavior" for more information on packet acknowledgments.

To disable negative acknowledgment, edit the operational override file and add a <use-nack-packets> element that is set to false. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <use-nack-packets>false</use-nack-packets>
      </incoming-message-handler>
   </cluster-config>
</coherence>

10.9 Using Network Filters

A network filter is a mechanism for plugging into the low-level TCMP stream protocol. Every message that is sent across the network by Coherence is streamed through this protocol. Coherence includes a predefined compression filter and also supports custom filters as required.


Filters are defined in the operational deployment descriptor and must be explicitly enabled within a tangosol-coherence-override.xml file.

Note:

Use filters in an all-or-nothing manner: if one cluster member is using a filter and another is not, the messaging protocol fails. Stop the entire cluster before configuring filters.

10.9.1 Using the Compression Filter

The compression filter is based on the java.util.zip package and compresses message contents to reduce network load. This filter is useful when there is ample CPU available but insufficient network bandwidth. The compression filter is defined in the com.tangosol.net.CompressionFilter class and declared in the operational deployment descriptor within the <filters> node. The compression filter's configured name is gzip, which is used when enabling the filter for specific services or when enabling the filter for all services.
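Although the filter itself is provided by Coherence, its strategy and level parameters map onto java.util.zip constants. The following self-contained sketch is not the CompressionFilter source; it merely shows the Deflater settings that correspond to strategy huffman-only and level speed (see Table 10-1):

```java
import java.util.zip.Deflater;

// Sketch of what the gzip filter's "strategy" and "level" parameters map
// to in java.util.zip: huffman-only strategy with a speed-optimized level.
class DeflateSketch {
    static byte[] compress(byte[] data) {
        Deflater d = new Deflater(Deflater.BEST_SPEED);   // level=speed
        d.setStrategy(Deflater.HUFFMAN_ONLY);             // strategy=huffman-only
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length * 2 + 64];      // generous output buffer
        int len = d.deflate(buf);
        d.end();
        return java.util.Arrays.copyOf(buf, len);
    }
}
```

Highly repetitive payloads shrink substantially even without LZ77 matching, which is the trade-off HUFFMAN_ONLY makes in exchange for lower CPU cost.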


10.9.1.1 Enabling the Compression Filter for Specific Services

To enable the compression filter for a specific service, include the <use-filters> element within the service's definition and add a <filter-name> subelement that is set to gzip. The following example configures the Distributed Cache service definition to enable the compression filter. All services that are instances of this service automatically use the filter.

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <service-type>DistributedCache</service-type>
            <service-component>PartitionedService.PartitionedCache
            </service-component>
            <use-filters>
               <filter-name>gzip</filter-name>
            </use-filters>
         </service>
      </services>
   </cluster-config>
</coherence>

10.9.1.2 Enabling the Compression Filter for All Services

To enable the compression filter for all services, add the <use-filters> element within the <outgoing-message-handler> element and add a <filter-name> subelement that is set to gzip. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <outgoing-message-handler>
         <use-filters>
            <filter-name>gzip</filter-name>
         </use-filters>
      </outgoing-message-handler>
   </cluster-config>
</coherence>

10.9.1.3 Configuring the Compression Filter

The compression filter includes parameters that can configure the filter's behavior. Table 10-1 describes each of the parameters that are available. See java.util.zip.Deflater for additional details.

The following example demonstrates configuring the compression filter and changes the default compression strategy and level:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <filters>
         <filter id="1">
            <filter-name>gzip</filter-name>
            <filter-class>com.tangosol.net.CompressionFilter</filter-class>
            <init-params>
               <init-param id="1">
                  <param-name>strategy</param-name>
                  <param-value>huffman-only</param-value>
               </init-param>
               <init-param id="2">
                  <param-name>level</param-name>
                  <param-value>speed</param-value>
               </init-param>
            </init-params>
         </filter>
      </filters>
   </cluster-config>
</coherence>

Table 10-1 Compression Filter Parameters

buffer-length

Specifies the compression buffer length in bytes. Legal values are positive integers or zero. The default value is 0.

level

Specifies the compression level. Legal values are:

  • default (default)

  • compression

  • speed

  • none

strategy

Specifies the compression strategy. Legal values are:

  • gzip (default)

  • huffman-only

  • filtered

  • default
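As a rough illustration of how the level and strategy values relate to java.util.zip.Deflater, the following sketch maps them onto Deflater constants and round-trips a buffer through a huffman-only, speed-level deflate cycle. The mapping and the class name are illustrative assumptions based on the Deflater API, not taken from the Coherence implementation (in particular, the filter's gzip strategy value refers to its framing, not a Deflater strategy constant):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CompressionParams {

    // Presumed mapping of the filter's "level" values onto Deflater levels.
    static int level(String name) {
        switch (name) {
            case "compression": return Deflater.BEST_COMPRESSION;
            case "speed":       return Deflater.BEST_SPEED;
            case "none":        return Deflater.NO_COMPRESSION;
            default:            return Deflater.DEFAULT_COMPRESSION;
        }
    }

    // Presumed mapping of the filter's "strategy" values onto Deflater
    // strategies ("gzip", the filter default, has no Deflater equivalent).
    static int strategy(String name) {
        switch (name) {
            case "huffman-only": return Deflater.HUFFMAN_ONLY;
            case "filtered":     return Deflater.FILTERED;
            default:             return Deflater.DEFAULT_STRATEGY;
        }
    }

    // Deflate with the given level/strategy, then inflate the result back.
    static byte[] roundTrip(byte[] data, String levelName, String strategyName)
            throws DataFormatException {
        Deflater deflater = new Deflater(level(levelName));
        deflater.setStrategy(strategy(strategyName));
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!deflater.finished()) {
            compressed.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();

        Inflater inflater = new Inflater();
        inflater.setInput(compressed.toByteArray());
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        while (!inflater.finished()) {
            restored.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return restored.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "repetitive payload repetitive payload".getBytes();
        byte[] restored = roundTrip(original, "speed", "huffman-only");
        if (!Arrays.equals(original, restored)) {
            throw new AssertionError("round trip failed");
        }
        System.out.println("round trip ok");
    }
}
```

As the sketch suggests, the speed and huffman-only settings trade compression ratio for lower CPU cost, which is usually the relevant trade-off for network filters.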


10.9.2 Using Custom Network Filters

Custom network filters can be created as required. Custom filters must implement the com.tangosol.io.WrapperStreamFactory interface. The WrapperStreamFactory interface provides the stream to be wrapped ("filtered") on input (received message) or output (sending message) and expects a stream back that wraps the original stream. These methods are called for each incoming and outgoing message. See Java API Reference for Oracle Coherence for details on these APIs.
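As a sketch of what such a filter can look like, the following class wraps both directions in java.util.zip deflation streams. The class name and the choice of compression are illustrative assumptions; only the two method signatures follow the WrapperStreamFactory contract. A real filter would declare implements com.tangosol.io.WrapperStreamFactory, which is omitted here so the sketch compiles without the Coherence libraries:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Illustrative custom filter; a real implementation would declare
// "implements com.tangosol.io.WrapperStreamFactory".
public class MyFilter {

    // Called for each received message: wrap the raw stream so that
    // reads see the decompressed bytes.
    public InputStream getInputStream(InputStream stream) {
        return new InflaterInputStream(stream);
    }

    // Called for each sent message: wrap the raw stream so that
    // writes are compressed before they reach the wire.
    public OutputStream getOutputStream(OutputStream stream) {
        return new DeflaterOutputStream(stream,
                new Deflater(Deflater.BEST_SPEED), 512);
    }
}
```

Because the factory methods run for every message, a production filter should keep the wrapping streams cheap to construct, or pool any expensive state.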


10.9.2.1 Declaring a Custom Filter

Custom filters are declared within the <filters> element in the tangosol-coherence-override.xml file. The following example demonstrates defining a custom filter named MyFilter. When declaring a custom filter, the filter id must be greater than 3 because there are three predefined filters that are declared in the operational deployment descriptor.

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <filters>
         <filter id="4">
            <filter-name>MyFilter</filter-name>
            <filter-class>package.MyFilter</filter-class>
            <init-params>
               <init-param id="1">
                  <param-name>foo</param-name>
                  <param-value>bar</param-value>
               </init-param>
            </init-params>
         </filter>
      </filters>
   </cluster-config>
</coherence>

10.9.2.2 Enabling a Custom Filter for Specific Services

To enable a custom filter for a specific service, include the <use-filters> element within the service's definition and add a <filter-name> subelement that is set to the filter's name. The following example enables a custom filter called MyFilter for the Distributed Cache service. All caches that are derived from this service automatically use the filter. Coherence instantiates the filter when the service starts and holds it until the service stops.

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <service-type>DistributedCache</service-type>
            <service-component>PartitionedService.PartitionedCache
            </service-component>
            <use-filters>
               <filter-name>MyFilter</filter-name>
            </use-filters>
         </service>
      </services>
   </cluster-config>
</coherence>

10.9.2.3 Enabling a Custom Filter for All Services

To enable a custom filter globally for all services, add the <use-filters> element within the <outgoing-message-handler> element and add a <filter-name> subelement that is set to the filter name. The following example enables a custom filter called MyFilter for all services. Coherence instantiates the filter on startup and holds it until the cluster stops.

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <outgoing-message-handler>
         <use-filters>
            <filter-name>MyFilter</filter-name>
         </use-filters>
      </outgoing-message-handler>
   </cluster-config>
</coherence>

10.10 Changing the TCMP Socket Provider Implementation

Coherence provides multiple underlying socket provider implementations (besides the default system socket provider) for use by TCMP. Socket providers for use by TCMP are configured for the unicast listener within the <unicast-listener> element.


10.10.1 Using the TCP Socket Provider

The TCP socket provider produces TCP-based sockets whenever possible. This socket provider creates DatagramSocket instances that are backed by TCP. When used with the WKA feature (multicast disabled), TCMP functions entirely over TCP without the need for UDP.

Note:

If this socket provider is used without the WKA feature (multicast enabled), TCP is used for all unicast communication, while multicast is still used for group-based communication.

The TCP socket provider uses up to two TCP connections between each pair of cluster members. No additional threads are added to manage the TCP traffic, because all of it is handled with nonblocking, NIO-based sockets; the existing TCMP threads handle all the connections. The connections are brought up on demand and are automatically reopened if they are disconnected for any reason. Two connections are used because doing so reduces send/receive contention and noticeably improves performance. TCMP is largely unaware that it is using a reliable protocol and, as such, still manages guaranteed delivery and flow control.

To specify the TCP socket provider, edit the operational override file and add a <socket-provider> element that includes the tcp value. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="tangosol.coherence.socketprovider">tcp
         </socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>

Alternatively, the tangosol.coherence.socketprovider system property can be used to specify the socket provider instead of using the operational override file. For example:

-Dtangosol.coherence.socketprovider=tcp

10.10.2 Using the SDP Socket Provider

The SDP socket provider produces SDP-based sockets whenever possible, provided that the JVM and the underlying network stack support SDP. This socket provider creates DatagramSocket instances that are backed by SDP. When used with the WKA feature (multicast disabled), TCMP functions entirely over SDP without the need for UDP.

Note:

If this socket provider is used without the WKA feature (multicast enabled), SDP is used for all unicast communication, while multicast is still used for group-based communication.

To specify the SDP socket provider, edit the operational override file and add a <socket-provider> element that includes the sdp value. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="tangosol.coherence.socketprovider">sdp
         </socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>

Alternatively, the tangosol.coherence.socketprovider system property can be used to specify the socket provider instead of using the operational override file. For example:

-Dtangosol.coherence.socketprovider=sdp

10.10.3 Using the SSL Socket Provider

The SSL socket provider produces only SSL-protected sockets. This socket provider creates DatagramSocket instances that are backed by SSL/TCP or SSL/SDP. SSL is not supported for multicast sockets; therefore, the WKA feature (multicast disabled) must be used for TCMP to function with this provider.

The default SSL configuration allows for easy configuration of two-way SSL connections, based on peer trust where every trusted peer resides within a single JKS keystore. More elaborate configuration can be defined with alternate identity and trust managers to allow for Certificate Authority trust validation. See Securing Oracle Coherence for detailed instructions on configuring and using SSL with TCMP.