Before continuing, it is worth verifying during the initial setup of each SN that the network interface card is configured as expected; configuration problems are much harder to debug later, when they surface under load.
Use the following command to determine which network interface is being used to access a particular subnet on each host. This command is particularly useful for machines with multiple NICs:
$ ip addr ls to 192.168/16
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.19/24 brd 192.168.1.255 scope global eth0
Use the following command to get information about the configuration of the NIC:
$ ethtool -i eth2
driver: enic
version: 2.1.1.13
firmware-version: 2.0(2g)
bus-info: 0000:0b:00.0
Use the following command to get information about the NIC hardware:
$ lspci -v | grep "Ethernet controller"
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
Use the following command to get information about the network speed. Note that this command requires sudo:
$ sudo ethtool eth0 | grep Speed
    Speed: 1000Mb/s
Consider using 10 gigabit Ethernet, or another fast network technology, to improve performance for large clusters.
The default maximum server socket backlog, typically 128, is too small for server-style loads. It should be at least 1K (1024) for server applications, and a value of 10K is not unreasonable for large stores. Set the net.core.somaxconn property in sysctl.conf to modify this value.
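As a minimal sketch of that change (the 1024 value and the output shown are only illustrations; choose a backlog appropriate to your workload):

$ sysctl net.core.somaxconn                    # check the current value
net.core.somaxconn = 128
$ echo 'net.core.somaxconn=1024' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p                               # apply without rebooting
net.core.somaxconn = 1024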
If the machine has multiple network interfaces, you can configure Oracle NoSQL Database to isolate HA replication traffic on one interface, while client request traffic uses another interface. Use the -hahost parameter to the makebootconfig command to specify the interface to be used by HA, as in the example below:
java -Xmx256m -Xms256m \
-jar kvstore.jar makebootconfig -root /disk1/kvroot \
-host sn10.example.com -port 5000 -harange 5010,5020 \
-storagedir /disk2/kv -hahost sn10-ha.example.com
In this example, all client requests will use the interface associated with sn10.example.com, while HA traffic will use the interface associated with sn10-ha.example.com.
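Before relying on this separation, it can be worth confirming that each hostname resolves to an address on the intended interface. A quick check using the example hostnames above might look like the following (the addresses and interface names shown are illustrative, not from the original configuration):

$ getent hosts sn10-ha.example.com
192.168.2.19      sn10-ha.example.com
$ ip addr ls to 192.168.2.19
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.2.19/24 brd 192.168.2.255 scope global eth1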
When multiple RNs are located on a machine with a single-queue network device, enabling Receive Packet Steering (RPS) can help performance by distributing the CPU load associated with packet processing (soft interrupt handling) across multiple cores. Multi-queue NICs provide such support directly and do not need to have RPS enabled.
Note that this tuning advice is particularly appropriate for customers using Oracle Big Data Appliance.
You can determine whether a NIC is multi-queue by using the following command:
$ sudo ethtool -S eth0
A multi-queue NIC will have entries like this:
rx_queue_0_packets: 271623830
rx_queue_0_bytes: 186279293607
rx_queue_0_drops: 0
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 273350226
rx_queue_1_bytes: 188068352235
rx_queue_1_drops: 0
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 411500226
rx_queue_2_bytes: 206830029846
rx_queue_2_drops: 0
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
...
For a 32-core Big Data Appliance using InfiniBand, use the following configuration to distribute receive packet processing across all 32 cores:
echo ffffffff > /sys/class/net/eth0/queues/rx-0/rps_cpus
where ffffffff is a bit mask selecting all 32 cores.
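The command above covers only the rx-0 queue and does not survive a reboot. As a sketch (the interface name, the 32-core mask, and the choice of startup mechanism are assumptions to adapt to your system), the same mask can be applied to every receive queue of an interface as follows:

# Apply the 32-core RPS mask to every receive queue on eth0.
# Run as root; add to a boot-time script if the setting should persist across reboots.
for q in /sys/class/net/eth0/queues/rx-*; do
    echo ffffffff > "$q/rps_cpus"
done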
For more information on RPS, please consult the Linux kernel networking documentation (Documentation/networking/scaling.txt).