Iperf3 maxes at 2 Gbps on PC router box - dual AQN-107 NICs

Bufferbloat is a receive-queue issue, not a transmit-queue issue. If your transmit queue is set too high, it just increases latency; too low, and it limits the bytes transmitted. So it is adjusted to give the best throughput at the best latency.

The only issue is that when you work with high speeds over 5 Gbit/s, you have to look at whether there are too many empty bytes in the packets. There are adjustments that can be declared for that in sysctl.conf, and there are formulas for calculating this too.
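The "formulas" here presumably refer to the bandwidth-delay product (BDP), the usual starting point for sizing TCP buffers. A minimal sketch, assuming a 10 Gbit/s link and a 2 ms round-trip time (illustrative numbers, not measurements from this thread):

```python
# Bandwidth-delay product: the amount of data that can be "in flight"
# on a link, and the usual lower bound for TCP buffer sizing.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Return the bandwidth-delay product in bytes."""
    return int(bandwidth_bps * rtt_seconds / 8)

# Illustrative values: 10 Gbit/s link, 2 ms round-trip time.
bdp = bdp_bytes(10e9, 0.002)
print(f"BDP: {bdp} bytes (~{bdp / 2**20:.1f} MiB)")  # 2500000 bytes, ~2.4 MiB
```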

Bufferbloat is a problem with queues.
Queues originate in situations where the input rate is higher than the output rate.
This can occur both on the TX side and the RX side of a NIC/interface.
The queue management of IPFire tries to keep the (HW) queues in the NIC short. This reduces the latency in the NIC transition, especially for packets with high priority. Placing a high-priority packet at the end of a queue of 1000 packets effectively lowers its priority: the packet must wait for the transmission of the 1000 packets ahead of it. A queue length of 10 means a maximal latency of the transmit time for 10 packets, i.e. 1% of the latency of a 1000-packet queue.
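To put numbers on that: the worst-case queueing delay is queue length times packet serialization time. A quick sketch, assuming 1500-byte packets on a 1 Gbit/s link (illustrative values, not taken from this thread):

```python
# Worst-case queueing delay: a packet entering the tail of a queue waits
# for every packet ahead of it to be serialized onto the wire.

def queue_delay_ms(queue_len: int, packet_bytes: int, link_bps: float) -> float:
    """Worst-case wait in milliseconds for a packet at the tail of the queue."""
    return queue_len * packet_bytes * 8 / link_bps * 1000

# Illustrative: 1500-byte packets on a 1 Gbit/s link.
print(queue_delay_ms(1000, 1500, 1e9))  # 12.0 ms for a 1000-packet queue
print(queue_delay_ms(10, 1500, 1e9))    # 0.12 ms for a 10-packet queue
```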

I just did a quick look.

The tcp_mem setting very generically declares the same size for all three values, but this can differ depending on how much memory you have to play with for buffers. 170M is not bad, but to increase this past 170M you would have to pick a different scaling variable.

The tcp_collapse_max_bytes setting is not declared, so it's relying on vanilla kernel logic to set this. But a lot of the time it's not very efficient at this, so I would set it to 6 MiB:
net.ipv4.tcp_collapse_max_bytes = 6291456

tcp_notsent_lowat is not declared either, so it's at its default, which is effectively unlimited. So I would declare a notsent low-water mark of 128 KiB so there is a lower wait state to send.

net.ipv4.tcp_notsent_lowat = 131072
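For reference, both sysctls above take plain byte counts, and the suggested values decode as power-of-two sizes. A one-line sanity check:

```python
# The two sysctl values above are plain byte counts.
assert 6 * 1024 * 1024 == 6291456   # tcp_collapse_max_bytes: 6 MiB
assert 128 * 1024 == 131072         # tcp_notsent_lowat: 128 KiB
print("6 MiB =", 6 * 1024 * 1024, "bytes; 128 KiB =", 128 * 1024, "bytes")
```

Note that tcp_collapse_max_bytes is not in mainline kernels; it is only available on patched kernels, so check it exists before declaring it.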

These are the things that stand out at a quick glance, and what I would look at trying.

I am honestly starting to think we are dealing with an LLM carefully tuned to sound natural but still not really making sense. All their posts are like that.


But getting back on topic: you should put the two NIC cards in the PCIe 3.0 slots and the video card in the PCIe 2.0 slot, because you want to avoid the PCIe 3.0 to PCIe 2.0 bridge, which is an internal bottleneck.

Btw, it's not really a good motherboard to do this with. The slots should be PCIe 3.0 or better.

Also, I just want to say, I've never seen full bandwidth when these 10G cards are connected to a PCIe 2.0 slot.
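Rough PCIe link math backs this up (this sketch assumes the standard per-lane rates and encoding overheads: 8b/10b for PCIe 2.0, 128b/130b for 3.0, and ignores TLP protocol overhead):

```python
# Approximate usable PCIe link bandwidth, before TLP/protocol overhead.

def pcie_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in Gbit/s for a PCIe link."""
    # (raw GT/s per lane, encoding efficiency)
    rates = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    gt, eff = rates[gen]
    return gt * eff * lanes

print(pcie_gbps(2, 4))  # PCIe 2.0 x4: 16.0 Gbit/s - enough for 10G in theory
print(pcie_gbps(2, 1))  # PCIe 2.0 x1: 4.0 Gbit/s - cannot carry 10 Gbit/s
print(pcie_gbps(3, 4))  # PCIe 3.0 x4: ~31.5 Gbit/s
```

So a 10G NIC in a full-width PCIe 2.0 x4 slot has headroom on paper, but a slot that trains at fewer lanes, or traffic crossing a chipset bridge, eats into it quickly.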

Who are you talking to here, Dave? Perhaps you didn’t realise, but it was you who necro-bumped this thread from over a year ago.

As noted in the FAQs, this community forum should be treated like a public park. Right now, you’re the guy in the corner revisiting old conversations without adding anything meaningful, and some of the advice provided appears questionable.

As such, I’m closing this thread.

Thanks,
A G
