EJFAT UDP General Performance Considerations

From epsciwiki
Revision as of 22:04, 21 December 2023 by Timmer

Here are a few things to ponder. I'll go over what I've done to try to speed up performance so that those who follow won't waste their time. Some useful links:

Scaling in the Linux Networking Stack
NIC packet reception
How to receive 1M pkts/sec

NIC queues on multi-cpu nodes

Contemporary NICs support multiple receive and transmit descriptor queues (Receive Side Scaling, or RSS). On reception, the NIC applies a filter to each packet that assigns it to one of a number of logical flows. Packets for each flow are steered to a separate receive queue, which in turn can be processed by a separate CPU. The goal of this is to increase performance.
The filter is typically a hash function over the network and/or transport layer headers. Typically, and on ejfat nodes, this is a 4-tuple hash over a packet's source/destination IP addresses and ports. The most common implementation uses an indirection table (256 entries on ejfat nodes) in which each entry stores a queue number. The receive queue for a packet is determined by taking the low-order bits of the computed hash (usually a Toeplitz hash) as an index into the indirection table and reading the queue number stored there.
// See if hashing is enabled
sudo ethtool -k enp193s0f1np1 | grep hashing

// Print out the indirection table to see how packets are distributed to Qs
sudo ethtool -x enp193s0f1np1
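To make the queue selection concrete, here is a minimal Python sketch of the Toeplitz hash and indirection-table lookup described above. The key and 4-tuple below are the standard Microsoft RSS verification values, not ejfat-specific, and the round-robin table fill over 63 queues is an illustrative assumption.

```python
# Sketch of RSS queue selection: Toeplitz hash over the 4-tuple,
# then the hash's low-order bits index the indirection table.
# Key and 4-tuple are the standard Microsoft RSS test vectors;
# the 256-entry table filled round-robin over 63 queues is assumed.

RSS_KEY = bytes([
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
])

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """For every input bit that is set, XOR in the 32-bit window of
    the key starting at that bit position."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for b in range(8):
            if byte & (0x80 >> b):
                pos = i * 8 + b
                result ^= (key_int >> (key_bits - 32 - pos)) & 0xFFFFFFFF
    return result

def rss_input(src_ip, dst_ip, src_port, dst_port):
    """Concatenate the 4-tuple: src IP, dst IP, src port, dst port."""
    return (bytes(src_ip) + bytes(dst_ip)
            + src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big"))

# 256-entry indirection table spreading entries over 63 queues (assumed fill)
table = [i % 63 for i in range(256)]

data = rss_input((66, 9, 149, 187), (161, 142, 100, 80), 2794, 1766)
h = toeplitz_hash(RSS_KEY, data)
queue = table[h & 0xFF]   # low-order 8 bits index a 256-entry table
print(hex(h), queue)
```

The 4-tuple here matches the published verification vector, so the hash can be checked against the documented value (0x51ccc178) before trusting the sketch.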
It's also possible to steer packets to queues based on other selectable hash fields. The commands below change which fields are hashed and show the resulting configuration:
// Change hashing to destination port only (slows things down if using 63 queues)
sudo ethtool -N enp193s0f1np1 rx-flow-hash udp4 n

// Change hashing back to 4-tuple
sudo ethtool -N enp193s0f1np1 rx-flow-hash udp4 sdfn

// See how packets are distributed to Qs
sudo ethtool -x enp193s0f1np1

// See which fields are hashed for UDP over IPv4
sudo ethtool -n enp193s0f1np1 rx-flow-hash udp4
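The parenthetical about destination-port-only hashing slowing things down can be seen directly: every packet sent to one UDP port then produces the same hash and lands on the same queue and CPU. The sketch below uses CRC32 as a stand-in for the NIC's Toeplitz hash; the port numbers and queue count are illustrative.

```python
# Why dst-port-only hashing serializes reception: all packets to one
# destination port get one hash, hence one queue. CRC32 is a stand-in
# for the NIC's Toeplitz hash; ports and queue count are illustrative.
import zlib

NUM_QUEUES = 63

def queue_for(fields: bytes) -> int:
    return zlib.crc32(fields) % NUM_QUEUES

dst_port = 19522
queues_nport = set()   # hash input: destination port only ('n')
queues_tuple = set()   # hash input includes the varying source port ('sdfn')
for src_port in range(10000, 11000):
    queues_nport.add(queue_for(dst_port.to_bytes(2, "big")))
    queues_tuple.add(queue_for(
        src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")))

print(len(queues_nport), len(queues_tuple))  # one queue vs. many queues
```

With 1000 distinct source ports, 4-tuple-style hashing spreads the flows across most of the 63 queues, while port-only hashing pins everything to a single queue.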
Find out how many NIC queues there are on your node by looking at the combined property:
// See how many queues there are 
sudo ethtool -l enp193s0f1np1
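Besides ethtool, the per-queue sysfs entries can be counted directly. This is a small Python sketch; the device path is an example, and the helper takes the queues directory as a parameter so it can be pointed anywhere.

```python
# Count a NIC's rx/tx queues from sysfs on Linux. The device name
# below is just an example; pass any /sys/class/net/<dev>/queues path.
import os

def count_queues(queues_dir: str) -> tuple:
    """Return (rx, tx) queue counts from entries like rx-0, tx-12."""
    entries = os.listdir(queues_dir)
    rx = sum(1 for e in entries if e.startswith("rx-"))
    tx = sum(1 for e in entries if e.startswith("tx-"))
    return rx, tx

path = "/sys/class/net/enp193s0f1np1/queues"  # example ejfat NIC name
if os.path.isdir(path):
    print(count_queues(path))
```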

Effect of NIC queues on UDP transmission

In the case of ejfat nodes, there is a maximum of 63 queues even though there are 128 cores. It seems odd to me that there isn't one queue per CPU, and the limit does not appear to be changeable, so most likely it's fixed by the NIC driver when the interface is first created.
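One side effect of having 63 queues (not a power of two) is that a 256-entry indirection table cannot divide evenly among them. Assuming the default equal round-robin fill that `ethtool -x` typically shows, a short sketch of the resulting per-queue share:

```python
# With 256 indirection-table entries spread round-robin over 63 queues
# (the assumed default "equal" fill), the division is slightly uneven:
# 256 = 63 * 4 + 4, so four queues get 5 entries and the rest get 4.
from collections import Counter

ENTRIES, QUEUES = 256, 63
table = [i % QUEUES for i in range(ENTRIES)]
share = Counter(table)

print(sorted(set(share.values())))               # → [4, 5]
print(sorted(q for q, n in share.items() if n == 5))  # → [0, 1, 2, 3]
```

So queues 0-3 receive about 25% more of the hash space than the others, a small built-in imbalance on top of whatever the traffic itself does.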


Jumbo Frames