Linux Network Configuration Settings

Before continuing, it is worth checking that the network interface card is configured as expected during the initial setup of each SN; such configuration problems are much harder to debug later, when they surface under load.

Use the following command to determine which network interface is being used to access a particular subnet on each host. This command is particularly useful for machines with multiple NICs:

$ ip addr ls to 192.168/16
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
    state UP qlen 1000
    inet 192.168.1.19/24 brd 192.168.1.255 scope global eth0 

Use the following command to get information about the NIC driver and firmware:

$ ethtool -i eth2
driver: enic
version: 2.1.1.13
firmware-version: 2.0(2g)
bus-info: 0000:0b:00.0 

Use the following command to get information about the NIC hardware:

$ lspci -v | grep "Ethernet controller"
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit 
Ethernet Controller (rev 02) 

Use the following command to get information about the network speed. Note that this command requires sudo:

$ sudo ethtool eth0 | grep Speed
    Speed: 1000Mb/s 

Consider using 10 Gigabit Ethernet, or other fast network implementations, to improve performance for large clusters.

Server Socket Backlog

The default maximum server socket backlog, typically 128, is too small for server-style loads. It should be at least 1K (1024) for server applications, and a 10K value is not unreasonable for large stores.

Set the net.core.somaxconn property in sysctl.conf to modify this value.
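For example, to raise the backlog limit to 1024 (an illustrative value; size it to your workload), add the following line to /etc/sysctl.conf:

net.core.somaxconn = 1024

Then apply and verify the change:

$ sudo sysctl -p
net.core.somaxconn = 1024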

Isolating HA Network Traffic

If the machine has multiple network interfaces, you can configure Oracle NoSQL Database to isolate HA replication traffic on one interface, while client request traffic uses another interface. Use the -hahost parameter of the makebootconfig command to specify the interface to be used by HA as in the example below:

java -Xmx64m -Xms64m \
-jar kvstore.jar makebootconfig -root /disk1/kvroot \
-host sn10.example.com -port 5000 -harange 5010,5020 \
-admindir /disk2/admin -admindirsize 2_gb \
-storagedir /disk2/kv -hahost sn10-ha.example.com

In this example, all client requests will use the interface associated with sn10.example.com, while HA traffic will use the interface associated with sn10-ha.example.com.
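You can confirm the mapping by resolving each hostname and checking which interface owns the address. The following sketch assumes hypothetical addresses and interface names:

$ getent hosts sn10-ha.example.com
192.168.2.19      sn10-ha.example.com
$ ip addr ls to 192.168.2.19
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
    state UP qlen 1000
    inet 192.168.2.19/24 brd 192.168.2.255 scope global eth1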

Receive Packet Steering

When multiple RNs are located on a machine with a single queue network device, enabling Receive Packet Steering (RPS) can help performance by distributing the CPU load associated with packet processing (soft interrupt handling) across multiple cores. Multi-queue NICs provide such support directly and do not need to have RPS enabled.

Note that this tuning advice is particularly appropriate for customers using Oracle Big Data Appliance.

You can determine whether a NIC is multi-queue by using the following command:

$ sudo ethtool -S eth0

A multi-queue NIC will have entries like this:

     rx_queue_0_packets: 271623830
     rx_queue_0_bytes: 186279293607
     rx_queue_0_drops: 0
     rx_queue_0_csum_err: 0
     rx_queue_0_alloc_failed: 0
     rx_queue_1_packets: 273350226
     rx_queue_1_bytes: 188068352235
     rx_queue_1_drops: 0
     rx_queue_1_csum_err: 0
     rx_queue_1_alloc_failed: 0
     rx_queue_2_packets: 411500226
     rx_queue_2_bytes: 206830029846
     rx_queue_2_drops: 0
     rx_queue_2_csum_err: 0
     rx_queue_2_alloc_failed: 0
... 
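Alternatively, on drivers that support it, you can query the channel (queue) counts directly; a Combined count greater than 1 indicates a multi-queue NIC. The output below is illustrative:

$ sudo ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       8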

For a 32-core Big Data Appliance using InfiniBand, use the following configuration to distribute receive packet processing across all 32 cores:

echo ffffffff > /sys/class/net/eth0/queues/rx-0/rps_cpus

where ffffffff is a bit mask selecting all 32 cores.
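The mask generalizes to other core counts: on an 8-core machine, for example, ff (binary 11111111) selects all eight cores. To inspect the current mask, or to set it without a root shell, you can use cat and sudo tee (a sketch; the zero mask shown is the typical default with RPS disabled):

$ cat /sys/class/net/eth0/queues/rx-0/rps_cpus
00000000
$ echo ff | sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus
ff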

For more information on RPS, see:

  1. About the Unbreakable Enterprise Kernel

  2. Receive packet steering

MTU Size

When using machines connected to networks running at 1000 Mb/s or higher speeds, it is recommended that you enable jumbo frames on the machines hosting the RNs. HA replication benefits from jumbo frames: the feeder's default batch buffer size (set by the HA parameter feederBatchBuffKb) is 8K, which is well matched to a jumbo frame.

Setting the MTU to 9000 also helps improve network performance on KV client machines with high-speed networks, especially if request or response payloads frequently exceed the default MTU size of 1500 bytes.

To enable jumbo frames, set the MTU to 9000 on each machine. Also, verify that this MTU is supported on the entire network path between machines hosting the RNs.
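One way to verify path support is to send a non-fragmenting ping sized to fill a jumbo frame: 8972 bytes of payload plus 28 bytes of IP and ICMP headers equals the 9000-byte MTU. The sketch below assumes a hypothetical remote host sn11.example.com; if any hop's MTU is smaller, the ping reports an error instead of a reply:

$ ping -M do -s 8972 sn11.example.com
PING sn11.example.com (192.168.1.20) 8972(9000) bytes of data.
8980 bytes from 192.168.1.20: icmp_seq=1 ttl=64 time=0.254 ms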

For example, to check the current MTU of the ens3 interface, use the following command:

# ip link show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:17:01:2c:b6 brd ff:ff:ff:ff:ff:ff

If required, change the MTU using the ip command:

# ip link set ens3 mtu 9000
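Note that a change made with ip link set does not persist across reboots. Make the MTU permanent through your distribution's network configuration; for example, on systems managed by NetworkManager (the connection name below is hypothetical):

# nmcli connection modify ens3-conn 802-3-ethernet.mtu 9000
# nmcli connection up ens3-conn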