2 Performing a Network Performance Test

Coherence provides a datagram utility and a message bus utility for testing network performance between two or more computers. Any production deployment should be preceded by a successful run of both tests.

2.1 Using the Datagram Test Utility

The Coherence datagram test utility is used to test and tune network performance between two or more computers. The utility verifies that a network is optimally configured to support Coherence cluster management communication. There are two types of tests: a point-to-point test, which measures the performance of a single pair of servers to ensure they are properly configured, and a distributed datagram test, which ensures that the network itself is functioning properly. Both tests must be run successfully.

The datagram test operates in one of three modes: as a packet publisher, a packet listener, or both. When the utility is run, a publisher transmits packets to a listener, which then measures the throughput, success rate, and other statistics. Tune the environment based on the results of these tests to achieve maximum performance. See Performance Tuning.

2.1.1 Running the Datagram Test Utility

The datagram test utility is run from the command line either by using the com.tangosol.net.DatagramTest class or by running the datagram-test script that is provided in the COHERENCE_HOME/bin directory. A script is provided for both Windows and UNIX-based platforms.

The following example demonstrates using the DatagramTest class:

java -server -cp coherence.jar com.tangosol.net.DatagramTest <command value> <command value> ...

The following example demonstrates using the script:

datagram-test <command value> <command value> ...

Table 2-1 describes the available command line options for the datagram test utility.

Table 2-1 Command Line Options for the Datagram Test Utility

Each entry below lists the command, whether it is required or optional, whether it applies to the publisher, the listener, or both, a description, and the default value.

-local (Optional; Both): The local address to bind to, specified as addr:port. Default: localhost:9999
-packetSize (Optional; Both): The size of packet to work with, specified in bytes. Default: 1468
-payload (Optional; Both): The amount of data to include in each packet. Use 0 to match packet size. Default: 0
-processBytes (Optional; Both): The number of bytes (in multiples of 4) of each packet to process. Default: 4
-rxBufferSize (Optional; Listener): The size of the receive buffer, specified in packets. Default: 1428
-rxTimeoutMs (Optional; Listener): The duration of inactivity before a connection is closed. Default: 1000
-txBufferSize (Optional; Publisher): The size of the transmit buffer, specified in packets. Default: 16
-txRate (Optional; Publisher): The rate at which to transmit data, specified in megabytes per second. Default: unlimited
-txIterations (Optional; Publisher): The number of packets to publish before exiting. Default: unlimited
-txDurationMs (Optional; Publisher): How long to publish before exiting, specified in milliseconds. Default: unlimited
-reportInterval (Optional; Both): The interval at which to output a report, specified in packets. Default: 100000
-tickInterval (Optional; Both): The interval at which to output tick marks. Default: 1000
-log (Optional; Listener): The name of a file in which to save a tabular report of measured performance. Default: none
-logInterval (Optional; Listener): The interval at which to output a measurement to the log. Default: 100000
-polite (Optional; Publisher): A switch indicating whether the publisher should wait to be contacted before publishing. Default: off
-provider (Optional; Both): The socket provider to use (system, tcp, ssl, file:xxx.xml). Default: system
arguments (Optional; Publisher): A space-separated list of addresses to publish to, specified as addr:port. Default: none
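
For example, the following commands (the host names, file name, and duration are illustrative) start a listener that writes its measurements to a log file, and a publisher that transmits to that listener for 60 seconds:

datagram-test.sh -local servera:9999 -log datagram.log

datagram-test.sh -local serverb:9999 -txDurationMs 60000 servera:9999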

2.1.2 How to Test Datagram Network Performance

This section includes instructions for running a point-to-point datagram test and a distributed datagram test. Both tests must be run successfully and show no significant performance issues or packet loss. See Understanding Datagram Report Statistics.

2.1.2.1 Performing a Point-to-Point Datagram Test

The example in this section demonstrates how to test network performance between two servers: Server A with IP address 195.0.0.1 and Server B with IP address 195.0.0.2. One server acts as a packet publisher and the other as a packet listener. The publisher transmits packets as fast as possible and the listener measures and reports performance statistics.

First, start the listener on Server A. For example:

datagram-test.sh

After pressing ENTER, the utility displays that it is ready to receive packets. Example 2-1 illustrates sample output.

Example 2-1 Output from Starting a Listener

starting listener: at /195.0.0.1:9999
packet size: 1468 bytes
buffer size: 1428 packets
  report on: 100000 packets, 139 MBs
    process: 4 bytes/packet
        log: null
     log on: 139 MBs

The test, by default, tries to allocate a network receive buffer large enough to hold 1428 packets, or about 2 MB. The utility reports an error and exits if it cannot allocate this buffer. Either decrease the requested buffer size using the -rxBufferSize parameter, or increase the operating system's network buffer settings. Increase the operating system buffers for the best performance. See Production Checklist.
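
If the allocation fails and the operating system settings cannot be changed immediately, the requested buffer can be reduced for the purposes of the test. For example (the value shown is illustrative; a smaller receive buffer increases the likelihood of reported packet loss):

datagram-test.sh -rxBufferSize 714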

Start the publisher on Server B and direct it to publish to Server A. For example:

datagram-test.sh servera

After pressing ENTER, the test instance on Server B starts both a listener and a publisher. However, the listener is not used in this configuration. Example 2-2 demonstrates the sample output that displays in the Server B command window.

Example 2-2 Datagram Test—Starting a Listener and a Publisher on a Server

starting listener: at /195.0.0.2:9999
packet size: 1468 bytes
buffer size: 1428 packets
  report on: 100000 packets, 139 MBs
    process: 4 bytes/packet
        log: null
     log on: 139 MBs

starting publisher: at /195.0.0.2:9999 sending to servera/195.0.0.1:9999
packet size: 1468 bytes
buffer size: 16 packets
  report on: 100000 packets, 139 MBs
    process: 4 bytes/packet
      peers: 1
       rate: no limit

no packet burst limit
oooooooooOoooooooooOoooooooooOoooooooooOoooooooooOoooooooooOoooooooooOoooooooooO

The series of o and O marks appears as data is (O)utput on the network. Each o represents 1000 packets, with an O indicator at every 10,000 packets.

On Server A, a corresponding set of i and I marks appears, representing network (I)nput. This indicates that the two test instances are communicating.

2.1.2.2 Performing a Bidirectional Datagram Test

The point-to-point test can also be run in bidirectional mode, in which both servers act as publishers and listeners. Use the same test instances that were used in the point-to-point test and supply the instance on Server A with the address of Server B. For example, on Server A run:

datagram-test.sh -polite serverb

The -polite parameter instructs this test instance not to start publishing until it starts to receive data. Run the same command as before on Server B:

datagram-test.sh servera

2.1.2.3 Performing a Distributed Datagram Test

A distributed test is used to test performance with more than two computers. For example, set up two publishers to target a single listener. This style of testing is far more realistic than simple one-to-one testing and may identify network bottlenecks that might not otherwise be apparent.

The following example runs the datagram test among 4 computers:

On Server A:

datagram-test.sh -txRate 100 -polite serverb serverc serverd

On Server B:

datagram-test.sh -txRate 100 -polite servera serverc serverd

On Server C:

datagram-test.sh -txRate 100 -polite servera serverb serverd

On Server D:

datagram-test.sh -txRate 100 servera serverb serverc

This test sequence causes all nodes to send a total of 100 MB per second to all other nodes (that is, about 33 MB per second to each of the three other nodes). On a fully switched 1 GbE network this should be achievable without packet loss.

To simplify the execution of the test, all nodes can be started with an identical target list; each node then transmits to itself as well, but this loopback data can easily be factored out. It is important to start all but the last node using the -polite switch, as this causes all other nodes to delay testing until the final node is started.
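
For illustration, the identical-target-list variation of the preceding four-server test distributes the same command to every server (omit the -polite switch on the last server started):

datagram-test.sh -txRate 100 -polite servera serverb serverc serverd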

2.1.3 Understanding Datagram Report Statistics

Each side of the test (publisher and listener) periodically reports performance statistics. The publisher simply reports the rate at which it is publishing data on the network. For example:

Tx summary 1 peers:
   life: 97 MB/sec, 69642 packets/sec
    now: 98 MB/sec, 69735 packets/sec

The report includes both the current transmit rate (since last report) and the lifetime transmit rate.

Table 2-2 describes the statistics that can be reported by the listener.

Table 2-2 Listener Statistics

Elapsed: The time interval that the report covers.
Packet size: The received packet size.
Throughput: The rate at which packets are being received.
Received: The number of packets received.
Missing: The number of packets which were detected as lost.
Success rate: The percentage of received packets out of the total packets sent.
Out of order: The number of packets which arrived out of order.
Average offset: An indicator of how far out of order packets arrive.

As with the publisher, both current and lifetime statistics are reported. The following example demonstrates a typical listener report:

Lifetime:
Rx from publisher: /195.0.0.2:9999
             elapsed: 8770ms
         packet size: 1468
          throughput: 96 MB/sec
                      68415 packets/sec
            received: 600000 of 611400
             missing: 11400
        success rate: 0.9813543
        out of order: 2
          avg offset: 1


Now:
Rx from publisher: /195.0.0.2:9999
             elapsed: 1431ms
         packet size: 1468
          throughput: 98 MB/sec
                      69881 packets/sec
            received: 100000 of 100000
             missing: 0
        success rate: 1.0
        out of order: 0
          avg offset: 0

The primary items of interest are the throughput and success rate. The goal is to find the highest throughput while maintaining a success rate as close to 1.0 as possible. A rate of around 10 MB/second should be achievable on a 100 Mb network setup. A rate of around 100 MB/second should be achievable on a 1 Gb network setup. Achieving these rates requires some throttle tuning. If the network cannot achieve these rates, or if the achieved rates are considerably lower, it is likely that there are network configuration issues. See Network Tuning.
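
As a rough sanity check of these targets (approximate arithmetic that ignores test overhead): a 1 Gb link carries 125 MB/second of raw bandwidth, and with 1468-byte packets the Ethernet, IP, and UDP framing adds roughly 4 to 5 percent of overhead, leaving a theoretical payload ceiling of about 120 MB/second. Sustaining around 100 MB/second without loss therefore consumes most of the usable capacity, and the same reasoning scales down to roughly 10 MB/second on a 100 Mb link.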

Throttling

The publishing side of the test may be throttled to a specific data rate, expressed in megabytes per second, by including the -txRate M parameter, where M represents the maximum MB/second the test should put on the network.
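
For example, to limit the publisher in the earlier point-to-point test to 25 MB/second (an illustrative rate), the publisher on Server B could be started as follows:

datagram-test.sh -txRate 25 servera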

2.2 Using the Message Bus Test Utility

The Coherence message bus test utility is used to test the performance characteristics of message bus implementations and the network on which they operate. The utility ensures that a network is optimally configured to support communication between clustered data services. In particular, the utility can be used to test the TCP message bus (TMB) implementation, which is the default transport for non-Exalogic systems, and the InfiniBand message bus (IMB) implementation, which is the default transport on Exalogic systems. Tune your environment based on the results of these tests to achieve maximum performance. See TCP Considerations.

2.2.1 Running the Message Bus Test Utility

The message bus test utility is run from the command line using the com.oracle.common.net.exabus.util.MessageBusTest class. The following example demonstrates using the MessageBusTest class:

java -server -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest <command value> <command value> ...

Table 2-3 describes the available command line options for the message bus test utility.

Table 2-3 Command Line Options for the Message Bus Test Utility

Each entry below lists the command, whether it is required or optional, a description, and the default value (where one applies).

-bind (Required): A list of one or more local end points to create. Default: none
-peer (Required): A list of one or more remote end points to send to. Default: none
-rxThreads (Optional): The number of receive threads per bound EndPoint (negative for reentrant).
-txThreads (Optional): The number of transmit threads per bound EndPoint.
-msgSize (Optional): The range of message sizes to send, expressed as min[..max]. Default: 4096
-chunkSize (Optional): The number of bytes to process as a single unit; that is, 1 for byte, 8 for long, and 0 to disable.
-cached (Optional): Re-use message objects where possible, reducing buffer manager overhead.
-txRate (Optional): The target outbound data rate, in MBps.
-txMaxBacklog (Optional): The maximum backlog the test should produce per transmit thread.
-rxRate (Optional): The target inbound data rate, in MBps. Cannot be used if -rxThreads is less than or equal to 0.
-flushFreq (Optional): The number of messages to send before flushing, or 0 for auto. Default: 0
-latencyFreq (Optional): The number of messages to send before sampling latency. Default: 100
-noReceipts (Optional): If specified, then receipts are not used and the test relies on GC to reclaim messages. Default: false
-manager (Optional): The buffer manager to utilize (net, direct, or heap). Default: net
-depotFactory (Optional): The fully qualified class name of a factory to use to obtain a Depot instance.
-reportInterval (Optional): The report interval. Default: 5 seconds
-polite (Optional): If specified, then this instance does not start sending until it is connected to.
-block (Optional): If specified, then a transmit thread blocks while awaiting a response. Default: false
-relay (Optional): If specified, then the process relays any received messages to one of its peers. Default: false
-ignoreFlowControl (Optional): If specified, then flow control events are ignored. If flow control events are ignored, use the -txMaxBacklog command to prevent out-of-memory errors. Default: false
-poll (Optional): If specified, then a PollingEventCollector implementation is used that queues all events and returns them only when they are polled for. A polling collector generally requires the -rxThreads command to be set to 1.
-prompt (Optional): If specified, then the user is prompted before each send.
-tabular (Optional): If specified, then tabular format is used for the output.
-warmup (Optional): The time duration or message count to discard as warmup. Default: 0
-verbose (Optional): If specified, then verbose debugging output is enabled.
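
These options can be combined on a single command line. For example, the following command (the host names, message-size range, and rate are illustrative) binds a local TMB end point, targets a remote peer, varies the message size between 1024 and 4096 bytes, and throttles the outbound rate to 100 MBps:

java -server -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverb:8000 -peer tmb://servera:8000 -msgSize 1024..4096 -txRate 100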

 

2.2.2 How to Test Message Bus Performance

This section includes instructions for running a point-to-point message bus test and a distributed message bus test for the TMB transport. Both tests must be run successfully and show no significant performance issues or errors.

2.2.2.1 Performing a Point-to-Point Message Bus Test

The example in this section demonstrates how to test network performance between two servers: Server A with IP address 195.0.0.1 and Server B with IP address 195.0.0.2. Server A acts as a server and Server B acts as a client.

First, start the listener on Server A. For example:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://servera:8000

After pressing ENTER, the utility displays that it is ready to receive messages. Example 2-3 illustrates sample output.

Example 2-3 Output from Starting a Server Listener

OPEN event for tmb://195.0.0.1:8000

Start the client on Server B and direct it to send messages to Server A. For example:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverb:8000 -peer tmb://servera:8000

The test instance on Server B starts both a client and a server listener. The message bus test always performs bidirectional communication. In its default mode the client sends an endless stream of messages to the server, and the server periodically replies to the client. In this configuration most communication is client-to-server, while the occasional server-to-client communication allows for latency measurements. Example 2-4 demonstrates the sample output that displays in the Server B command window.

Note:

The performance results in Example 2-4 may not be indicative of your network environment.

Example 2-4 Message Bus Test—Starting a Client and Server

OPEN event for tmb://195.0.0.2:8001
CONNECT event for tmb://195.0.0.1:8000 on tmb://195.0.0.2:8001
now:  throughput(out 65426msg/s 2.14gb/s, in 654msg/s 21.4mb/s),
   latency(response(avg 810.71us, effective 1.40ms, min 37.89us, max 19.59ms),
   receipt 809.61us), backlog(out 42% 1556/s 48KB, in 0% 0/s 0B), connections 1,
   errors 0
life: throughput(out 59431msg/s 1.94gb/s, in 594msg/s 19.4mb/s),
   latency(response(avg 2.12ms, effective 3.85ms, min 36.32us, max 457.25ms),
   receipt 2.12ms), backlog(out 45% 1497/s 449KB, in 0% 0/s 0B), connections 1,
   errors 0

The test, by default, tries to use the maximum bandwidth to push the maximum number of messages, which results in increased latency. Use the -block command to switch the test from streaming data to a request-and-response exchange, which provides a better representation of the network's minimum latency:

now:  throughput(out 17819msg/s 583mb/s, in 17820msg/s 583mb/s),
   latency(response(avg 51.06us, effective 51.06us, min 43.42us, max 143.68us),
   receipt 53.36us), backlog(out 0% 0/s 0B, in 0% 0/s 0B), connections 1, errors 0
life: throughput(out 16635msg/s 545mb/s, in 16635msg/s 545mb/s),
   latency(response(avg 56.49us, effective 56.49us, min 43.03us, max 13.91ms),
   receipt 59.43us), backlog(out 0% 0/s 2.18KB, in 0% 0/s 744B), connections 1,
   errors 0
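
Output of this form can be produced by adding the -block option to the client command used in the point-to-point test (host names as in the earlier examples):

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverb:8000 -peer tmb://servera:8000 -block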

2.2.2.2 Performing a Bidirectional Message Bus Test

The point-to-point test can also be run in bidirectional mode, in which both servers act as client and server. Use the same test instances that were used in the point-to-point test. For example, on Server A run:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://servera:8000 -peer tmb://serverb:8000 -polite

The -polite parameter instructs this test instance not to start publishing until it starts to receive data. On Server B run:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverb:8000 -peer tmb://servera:8000

2.2.2.3 Performing a Distributed Message Bus Test

A distributed test is used to test performance with more than two computers. This style of testing is far more realistic than simple one-to-one testing and may identify network bottlenecks that might not otherwise be apparent.

The following example runs a bidirectional message bus test among 4 computers:

On Server A:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://servera:8000 -peer tmb://serverb:8000 tmb://serverc:8000 tmb://serverd:8000 -polite

On Server B:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverb:8000 -peer tmb://servera:8000 tmb://serverc:8000 tmb://serverd:8000 -polite

On Server C:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverc:8000 -peer tmb://servera:8000 tmb://serverb:8000 tmb://serverd:8000 -polite

On Server D:

java -cp coherence.jar com.oracle.common.net.exabus.util.MessageBusTest -bind tmb://serverd:8000 -peer tmb://servera:8000 tmb://serverb:8000 tmb://serverc:8000 -polite

It is important to start all but the last node using the -polite switch, as this causes all other nodes to delay testing until the final node is started.

2.2.3 Understanding Message Bus Report Statistics

Each side of the message bus test (client and server) periodically reports performance statistics. The following example output is from the client side, which sends the requests:

throughput(out 17819msg/s 583mb/s, in 17820msg/s 583mb/s),
   latency(response(avg 51.06us, effective 51.06us, min 43.42us, max 143.68us),
   receipt 53.36us), backlog(out 0% 0/s 0B, in 0% 0/s 0B), connections 1, errors 0

The report includes both statistics since the last report (now:) and the aggregate lifetime statistics (life:).

Table 2-4 describes the message bus statistics.

Table 2-4 Message Bus Statistics

throughput: The number of messages per second being sent and received, and the transmission rate.
latency: The time spent for message response and receipt.
backlog: The number of messages waiting to be sent and to be processed.
connections: The number of open connections between message listeners.
errors: The number of messages which were detected as lost.

The primary items of interest are throughput and latency. The goal should be to utilize as much network bandwidth as possible without resulting in high latencies. If bandwidth usage is low or latencies are high, consider tuning TCP settings. A high backlog or error rate can also indicate network configuration issues. See Network Tuning.
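
As an illustration only (the appropriate values are platform and workload specific; see Network Tuning for authoritative guidance), on Linux the socket buffer limits that commonly constrain TMB throughput can be inspected and raised with sysctl:

sysctl net.core.rmem_max net.core.wmem_max
sysctl -w net.core.rmem_max=2097152
sysctl -w net.core.wmem_max=2097152

The 2 MB value shown is illustrative; persistent changes are typically made in /etc/sysctl.conf.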