EJFAT UDP Transmission Performance


UDP Performance Overview

This page is dedicated to researching methods to maximize reliable UDP transmission rates between nodes.

CURRENTLY UNDER CONSTRUCTION!


Transmission between indra-s1 and indra-s2

The following tests were run with 1 sender and 1 receiver. The sender was the packetBlaster program whose help gives the following output:


usage: ./packetBlaster
        [-h] [-v] [-ip6] [-sendnocp]
        [-bufdelay] (delay between each buffer, not packet)
        [-host <destination host (defaults to 127.0.0.1)>]
        [-p <destination UDP port>]
        [-i <outgoing interface name (e.g. eth0, currently only used to find MTU)>]
        [-mtu <desired MTU size>]
        [-t <tick>]
        [-ver <version>]
        [-id <data id>]
        [-pro <protocol>]
        [-e <entropy>]
        [-b <buffer size>]
        [-s <UDP send buffer size>]
        [-cores <comma-separated list of cores to run on>]
        [-tpre <tick prescale (1,2, ... tick increment each buffer sent)>]
        [-dpre <delay prescale (1,2, ... if -d defined, 1 delay for every prescale pkts/bufs)>]
        [-d <delay in microsec between packets>]

        EJFAT UDP packet sender that will packetize and send buffer repeatedly and get stats
        By default, data is copied into buffer and "send()" is used (connect is called).
        Using -sendnocp flag, data is sent using "send()" (connect called) and data copy minimized, but original data buffer changed


The blaster was sending 89 kB buffers, each split into 10 packets, from Indra-s1 to the load balancer (129.57.109.254, port 19522) with an MTU of 9000.
(With a 9000-byte MTU, each packet carries roughly 8.9 kB of payload once the IP, UDP, and EJFAT headers are subtracted, hence 10 packets per 89 kB buffer.)
The sending thread was NOT tied to any specific core, and the entropy and data id were both set to 0:


./packetBlaster -host 129.57.109.254 -p 19522 -mtu 9000 -ver 2 -sendnocp -t 0 -id 0 -e 0 -b 89000


The receiver was the packetBlastee program whose help gives the following output:


usage: ./packetBlastee
        [-h] [-v] [-ip6]
        [-a <listening IP address (defaults to INADDR_ANY)>]
        [-p <listening UDP port>]
        [-b <internal buffer byte size>]
        [-r <UDP receive buffer byte size>]
        [-cores <comma-separated list of cores to run on>]
        [-tpre <tick prescale (1,2, ... expected tick increment for each buffer)>]

        This is an EJFAT UDP packet receiver made to work with packetBlaster.


The blastee was receiving on Indra-s2. Initially the receiving thread was NOT tied to any specific core.
This program tracks the number of dropped packets; to keep that statistic accurate, the value given to
the -tpre command line option must be identical for both sender and receiver, which ensures the receiver
knows which tick is coming next.


./packetBlastee -p 17750
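

If a non-default tick prescale is used, the same value must be passed to both programs, for example (the prescale of 2 here is arbitrary, purely for illustration):

./packetBlaster -host 129.57.109.254 -p 19522 -mtu 9000 -ver 2 -sendnocp -t 0 -id 0 -e 0 -b 89000 -tpre 2
./packetBlastee -p 17750 -tpre 2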


The speed of data transfer depends upon a number of factors:

  1. whether the sending thread was tied to a specific core or cores
  2. whether the receiving thread was tied to a specific core or cores
  3. whether a Linux ksoftirqd kernel thread was running and consuming significant CPU time
  4. which pair of terminals the sender & receiver were run from


The highest observed average data rate was 4.0 GB/s.
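
To put that rate in perspective (a rough estimate, assuming about 8.9 kB of data payload per 9000-byte-MTU packet): 4.0 GB/s works out to roughly 4.0e9 / 8900 ≈ 450,000 packets per second, which is why the per-packet interrupt and softirq costs discussed below matter.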


Terminals

With 1 sender and 1 receiver running at a time, among the multiple terminals open on s1 and s2 some terminal pairs produce the highest transfer rate and some do not.
How that is determined by the operating system is a mystery at this point. Sometimes, to get the highest rate, a sender/receiver pair needs to be run,
killed, and restarted a number of times. Again, why this is so is a mystery. If a sender/receiver pair is not running at 4 GB/s, it generally runs at 3.3 GB/s.

Sending Thread

If the sending thread was tied to a single core or to multiple cores, performance diminished noticeably; the top transfer rate dropped to 3.5 GB/s.
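
Core pinning is done with the -cores option shown in the help above. For example, to tie the sender to a single core (the core number 2 here is an arbitrary illustration, not necessarily the value used in these tests):

./packetBlaster -host 129.57.109.254 -p 19522 -mtu 9000 -ver 2 -sendnocp -t 0 -id 0 -e 0 -b 89000 -cores 2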

Receiving Thread

Generally, running at the top transfer rate meant dropping packets on occasion. However, the dropped packets could be eliminated by tying the reassembly thread
of the receiver to a single core; this did not improve the top speed, but it did get rid of the drops. Tying it to multiple cores, on the other hand, always diminished
performance considerably; with 2 cores, the received rate drops to almost half the sending rate. The advantage of tying reassembly to a single core comes with an
important caveat: it depends on the ksoftirqd thread, discussed below.
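
Pinning the blastee works the same way, assuming the -cores option shown in its help is what ties the reassembly thread down; for example, to tie it to a single core (core 1 is an arbitrary choice):

./packetBlastee -p 17750 -cores 1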

ksoftirqd Thread

ksoftirqd threads are kernel threads that drive soft IRQs, things like NET_TX_SOFTIRQ and NET_RX_SOFTIRQ.
These are implemented separately from the actual interrupt handlers in the device drivers, where latency is critical.
An actual interrupt handler, or hardware IRQ handler, for a network card is concerned with getting data to/from the device as quickly as possible.
It doesn't know anything about TCP/IP processing. It knows about its bus handling (say PCI), its card specifics (ring buffers, control/config registers),
DMA, and a bit about Ethernet. It passes packets (sk_buffs, to be exact) through queues to/from the ksoftirqd threads.
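
One way to watch this activity on a node (standard Linux tools, nothing EJFAT-specific) is to look at the per-CPU softirq counters and at the ksoftirqd threads themselves:

grep -E 'NET_TX|NET_RX' /proc/softirqs
ps -eLo pid,psr,pcpu,comm | grep ksoftirqd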

By splitting the workload, the kernel doesn't try to do everything in the actual hardware IRQ handler while other IRQs are potentially masked.
If the Linux kernel is > 2.4.2, NAPI should be available, and one of the things a driver can support is interrupt mitigation.
If the packet rate becomes too high, a NAPI driver can turn off interrupts (which are fairly expensive, roughly one per packet) and go into a polling mode.
However, the driver needs to support this.
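
Whether a NIC driver is mitigating/coalescing interrupts can be checked with ethtool. The interface name enp1s0 below is only a placeholder for whichever interface is actually in use, and the grep of /proc/interrupts only works if the driver names its IRQs after that interface:

ethtool -c enp1s0
grep enp1s0 /proc/interrupts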