Slow Speed Red <-> Green

Hello,

I have an APU4C, but I need to replace it because of crazy coil whine,
so I bought this little guy: an Acemagic T8 Plus.

I installed IPFire on the new system and restored my configuration with the web backup option.
Everything looks good, all rules and settings are present, and the system is working as it should… except for one important part: the speed from my clients on green to the internet is very slow, around 60-80 Mbit/s.

I used iperf3 to test my network:
IPFire system → internet iperf3 server: stable 935 Mbit/s
green client → IPFire system: 938 Mbit/s
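
For reference, the tests roughly looked like this; the iperf3 server hostname and the green-side address below are placeholders, not the actual ones used:

```
# an iperf3 server has to be listening on the far end:
iperf3 -s

# from the IPFire box towards an iperf3 server on the internet (placeholder hostname):
iperf3 -c iperf.example.net -t 30

# from a green client towards the IPFire box (placeholder green address):
iperf3 -c 192.168.1.1 -t 30
```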

So the connection to my IPFire system is fine, and so is the connection from my IPFire system to the internet (1 Gbit FTTH).

I have no IPS active. Could someone please help me and point me in the right direction?

Thanks a lot

Edit: I also did a complete reinstallation on a Proxmox system.
Same problem. At the moment I only have the Proxmox system with IPFire running; here is my profile: fireinfo.ipfire.org - Profile 9d265280f0b55752889017de273587f34de7cd72
iperf3 from a green client to the internet gives me a maximum of 3% CPU load.

Edit 2: I did a bit more testing and a few more restarts. It looks like it is an iperf3 problem. When I download files from my IPFire with wget -O /dev/null -o /dev/null https://speedtest.init7.net/32GB.dd I get 910-920 Mbit/s with a CPU load of around 30% on one core.
If I do the same from a green client, I get around 740 Mbit/s and one CPU core goes up to 100% at 3.4 GHz.
It looks like the CPU is the bottleneck here, but that is a “new” Intel CPU, and I am surprised that it should have less power than the old AMD GX-412TC from the APU.
Intel Processor N95 vs AMD GX-412HC: https://www.cpu-monkey.com/en/compare_cpu-intel_processor_n95-vs-amd_gx_412hc
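
A quick way to reproduce the wget comparison and watch the per-core load at the same time is something like the following; mpstat comes from the sysstat package and may need to be installed first (plain top with the per-core view toggled via “1” works as well):

```
# on the IPFire box: per-core utilisation once per second;
# watch the busy core and its %soft (softirq) column
mpstat -P ALL 1

# on the green client: the same download test as above
wget -O /dev/null -o /dev/null https://speedtest.init7.net/32GB.dd
```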

Try it without Proxmox.

3 cores?
https://www.cpubenchmark.net/compare/5206vs5050/Intel-N95-vs-AMD-GX-412TC-SOC
The CPU is far more powerful than the old AMD SoC and should be fast, however…

Acemagic does not specify the network adapters, and not all NICs can deliver the maximum theoretical throughput.

The APU4C usually can.

It is not the CPU that is the problem but the network controller chip on your board: it is only using one core of your CPU.

On the APU4, the network controller chip distributes network traffic across multiple cores, up to all four cores of the CPU, as required.

The i211 network controller of the APU4 has two queues, unlike the i210 in the APU2, which has four. I’m wondering if it’s possible to offload these two queues across all four cores of the CPU: could one core handle one end of a queue and thereby distribute the load collectively over all four cores? I asked ChatGPT, which, depending on the prompt, can relay information on the topic with a variable degree of reliability. In this case, I tend to believe its answer, which follows below.


As for your question, whether two queues can be offloaded onto four cores generally depends on the architecture and the network stack of the operating system. Typically, multiple cores can service a single queue, but the efficiency of this operation varies. In some systems, each queue is tied to a specific core for better performance, but that’s not a strict rule.

In a Linux-based system like IPFire, it’s possible to distribute the processing of network packets across multiple cores through features like Receive Side Scaling (RSS). However, the i211 network controller has only two hardware queues. While the operating system can distribute the work across all available cores, the hardware limitation of two queues may pose a bottleneck.

Simply put, you can use all four cores with an i211, but you might not fully leverage the parallel processing capabilities that you would get with a network controller that supports more queues.
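
As a rough sketch (the interface name green0 is just an example, adjust it to the actual assignment), the queue count and the interrupt-to-core mapping can be checked like this, and Receive Packet Steering (RPS) can spread the packet processing over more cores in software when the hardware offers fewer queues than there are cores:

```
# how many combined RX/TX queues the NIC driver exposes
ethtool -l green0

# which CPU cores actually service the NIC's interrupts
grep green0 /proc/interrupts

# software RPS: let cores 0-3 (bitmask f) process packets received on queue 0
echo f > /sys/class/net/green0/queues/rx-0/rps_cpus
```

Note that a single TCP flow still hashes to one core, so RPS tends to help more with many parallel connections than with one big download.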

Hello @garog, welcome to the community and well done on identifying your bottleneck.

Here’s a summary of key considerations in choosing the firewall hardware for achieving network traffic speeds and avoiding potential bottlenecks:

To aim for a throughput of at least 2.4 Gbit/sec on an IPFire firewall, a multi-core CPU with a minimum clock speed of 1.5 GHz is advisable. PCIe version 3 or newer is recommended for adequate bandwidth. Network cards with RSS support can help in making better use of multi-core CPUs. While having more hardware queues can potentially improve parallelism, the number of queues doesn’t have to match the number of CPU cores. In general, hardware specifications offer a guideline, not a guarantee, for performance.
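
For example, whether a NIC is actually attached with enough PCIe bandwidth can be checked on the running system; the PCI address below is just an example, found with the first command:

```
# list the Ethernet controllers and note their PCI addresses
lspci | grep -i ethernet

# compare the negotiated link (LnkSta) with the maximum the card supports (LnkCap)
lspci -vv -s 03:00.0 | grep -E 'LnkCap:|LnkSta:'
```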

Actual performance may vary and should be confirmed through real-world testing. We should keep documenting our findings with discussions like this to help the community in selecting good hardware.


That was my first attempt, and because I had these problems, I tried it with Proxmox, just because I wanted to try Proxmox.

_pike_it yes, I had one core reserved for another Proxmox container, but I also tried it with 4 cores, in “host” mode, in kvm64 mode, and in x86_64_V2_aes mode (or something like that). Always the same problem.

_bonnietwin _cfusco _cfusco thanks for the info. I have the APU.4C4 system board with 4 GB.
The Acemagic is using Realtek NICs; here is my system info without Proxmox:
profile/c714da38b28fc0bc7d9b517f31b78767541fad7f (can’t post more links)
I know that the Intel NICs are a little bit better, but I don’t care about 2-5 Mbit/s.

I reinstalled the system again today, this time using the built-in NVMe drive instead of running IPFire from a USB stick. And what can I say, it is working now: 1 Gbit/s from a green client to the internet, around 938 Mbit/s, both for iperf3 and for downloading a file.
I have the exact same setup, restored from the saved settings of my APU firewall.

No idea why the installation on the USB stick was the bottleneck… it is working now as expected. The CPU goes up to 70-80% on one core while downloading at full speed.
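
If anyone runs into the same thing, one way to check whether the USB stick itself is the limiting factor would be to watch disk utilisation and I/O wait during a full-speed download; iostat is part of the sysstat package and may need to be installed:

```
# %util close to 100% on the USB device during a download would point at storage
iostat -x 2

# a high "wa" (I/O wait) column tells the same story
vmstat 2
```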

Thank you all for your answers.
(Sorry, the @ mentions are not working… 2-link limit.)

That irrelevant detail is missing in the first post
