New IPFire user here (since the ‘nice’ Netgate announcement 2 days ago)!
My already-working 10GE network has been accessing the Internet at 3 Gbps since last year, and I never had an issue with pfSense, which was my firewall.
Since I set up IPFire on the same hardware (i7-8700K with 32 GB RAM, 1 x 10GE Intel X540 copper to the ISP router, and 1 x Chelsio 10GE dual-port fiber to the LAN-side 10GE switch), I can barely reach 2 Gbps in download (maxing one CPU core at 100% according to top on IPFire - that is too high a load for just 3 Gbps on that CPU), while the upload can max out at the prescribed 3 Gbps (one core at ~50%).
To me, there is clearly some limitation in how IPFire operates. Is NAT on IPFire a huge performance hit?
iperf3 speed between my workstation and IPFire is a solid 9.41Gbps up and down.
Speed from IPFire to my ISP is a solid 3.2 Gbps up and down.
I will conduct more tests because this behavior is not normal.
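For anyone wanting to reproduce these measurements, the tests above amount to running iperf3 in both directions across each leg of the path. A minimal sketch (the server address is a placeholder, not my actual setup):

```shell
# On the far end (the firewall or a host behind it), start a server:
#   iperf3 -s
# From the client, test both directions (address is a placeholder):
iperf3 -c 192.0.2.1 -t 10        # client -> server (upload)
iperf3 -c 192.0.2.1 -t 10 -R     # reverse mode (download)
```

Testing each segment separately (workstation to firewall, firewall to ISP) is what isolates the firewall as the bottleneck, since each raw link tests clean.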
While IPFire is based on Linux, and NAT on Linux has indeed been optimized over time, your speed issue might not necessarily stem from NAT itself. It seems more likely that the root of the problem is related to the network card driver support within Linux. The behavior you’re describing, especially with one CPU core maxing out, suggests a lack of multi-queue support for your network card (Receive-Side Scaling, RSS). It might be worth checking if the driver in use supports multiple queues to distribute the load more efficiently across cores.
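To check this on the box itself, something along these lines should show both the queue count and which cores take the interrupts. This is a generic Linux sketch; "red0" is IPFire's usual WAN interface name and is an assumption here, so adjust to your setup:

```shell
# Interface name is an assumption; IPFire names the WAN "red0".
IF=red0

# How many RX/TX queues (channels) does the driver expose?
if command -v ethtool >/dev/null 2>&1; then
  ethtool -l "$IF" 2>/dev/null || echo "no channel info for $IF"
fi

# Which CPU cores are servicing this NIC's interrupts?
grep "$IF" /proc/interrupts || echo "no interrupt lines matched $IF"
```

If `ethtool -l` reports a combined channel count of 1 (or all interrupt lines land on one CPU), that would match the single-core saturation described above.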
It would surprise me a lot if this were a driver issue when direct transfers from both NICs to their endpoints show no problem. Other firewalls (OPNsense, to name one) also struggle to keep the speed up on the same hardware, while a FortiGate VM using only 1 CPU doesn't (inside a virtualized environment, on top of that!).
I pushed the threshold even more on IPFire by adding a block list of IPs, and the performance dropped by another ~300 Mbps - that's not normal at all for just a block list. Either there are optimizations in IPFire I am not aware of that I have to apply, or there is something else I am also not aware of.
I reverted back to pfSense and things are back to normal for me, so I haven't pushed my investigation any further.
I might try IPFire again, but I must say the lack of control over things I am accustomed to having with firewalls in general is not what I am looking for.
The major thing I really don't like is the “green/red/blue/orange” zone concept. It seems fine on the surface for people who have no idea what they are doing, but that is not my case, and I need a more advanced interface where you can do everything in the GUI.
The thing that puzzles me the most for a firewall based on Linux is that you need to reboot to change how your interfaces behave. I don't understand how the mechanism behind it has been implemented on top of the iptables rules that IPFire seems to use, but I suspect this implementation is what is causing the performance drop I mentioned. NAT and IP block lists on today's hardware are not something that should really impact bandwidth.
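For context on why NAT itself should be cheap: outbound NAT on any Linux firewall boils down to a netfilter masquerade rule plus connection tracking, which only rewrites the first packet of each flow and then follows the conntrack entry. A generic sketch (not IPFire's actual generated rules; "red0" is a placeholder WAN interface name, and the rule needs root):

```shell
# Generic Linux outbound NAT (masquerade). "red0" is a placeholder.
iptables -t nat -A POSTROUTING -o red0 -j MASQUERADE

# The per-packet cost lives in connection tracking; its table size
# and current usage are visible via sysctl:
sysctl net.netfilter.nf_conntrack_max
sysctl net.netfilter.nf_conntrack_count
```

With conntrack doing the heavy lifting per flow rather than per packet, NAT alone rarely explains a gigabit-scale slowdown on a desktop-class CPU, which supports the suspicion that something else is eating cycles.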
This is not anything I have experienced. I use IP Blocking and I do not see the drops you experience. I’ll test to make sure.
I am not sure I understand the NAT issue you experienced, so I cannot comment.
I know you've moved back to pfSense, but if you do try IPFire again, it would be good to collect the details. We are always looking to make things better, and we would like to learn from the issues you saw.
Did you really use pfSense Plus Home+Lab? That is just the free version of the proprietary pfSense Plus. Why don't you use pfSense Community Edition, which is still free and open source? Sure, they will scrub that one too one day to sell their proprietary products, but for now it is free.
But from my perspective it would be more logical to move to OPNsense in your position, because both are rooted in m0n0wall. And are you aware that you won't be able to use IPv6 in IPFire 2.x? Though if you haven't noticed, you probably aren't using it anyway.
Yes, I was and still am using pfSense Plus, as this is the “real” pfSense in a sense. Not that I don't recommend CE, but if you want to stick with what pfSense has always been, the Plus version is it. Also, I have a (now retired) Netgate appliance, so I have the licence on that one for free as well.
As for IPv6, no one really needs to use it for internal networking - like ~95% of enterprises. So no, I didn't know about that limitation in IPFire. If we are one day forced to use IPv6 on the public side, well, I'll adjust things that day… I have been waiting for that day since 1996 and it will probably never come in my lifetime.
For OPNsense, no thanks, I'll pass. I gave it a try 4 years ago, and from a professional networking standpoint (mine), I don't trust that ‘flavor’ of m0n0wall, as it is subpar to pfSense.
I found another user with bad performance on IPFire while using PPPoE. I was surprised, because this is usually an issue with FreeBSD and not Linux. Either OpenWrt has a better implementation of the PPPoE protocol, or this is some RSS/interrupt issue.
My WAN is configured for PPPoE to get my public IP. The MTU on the WAN was adjusted to the same value (1464) as it was (and is) on pfSense. I know PPPoE causes some performance issues on low-end hardware (ARMv7/v8 or low-clock Atoms), but it never did with this hardware.
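If anyone wants to double-check that the PPPoE MTU is actually in effect, something like this works on any Linux box. The interface name ppp0 is an assumption (the usual default for a PPPoE link), and the 1436-byte payload is 1464 minus 28 bytes of IP and ICMP headers:

```shell
# Show the MTU currently set on the PPPoE link (name is an assumption):
ip link show ppp0 2>/dev/null | grep -o 'mtu [0-9]*' || echo "ppp0 not present"

# Confirm the path end-to-end: forbid fragmentation and send the
# largest payload that fits in MTU 1464 (1464 - 20 IP - 8 ICMP = 1436):
ping -M do -s 1436 -c 3 example.com
```

If the ping fails with "message too long" while a smaller payload succeeds, the effective MTU on the path is lower than configured.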
They are still using ppp version 2.4.9 in their “main” git branch.
Version 2.5.0 was released in IPFire Core Update 179, and during the Testing release, testing was carried out by a few people who use PPPoE; a couple of issues were identified and fixed.
After the release in Core Update 179, other users found a problem: a different directory was now used. This has been fixed as a bug by the ppp developers, and version 2.5.1 with those fixes should be available soon and will then be submitted into IPFire. The users affected by the bug were able to work around it by manually creating the required directory.
None of the people who tested the ppp-2.5.0 version had complained about speed until this post.
Version 2.5.0 did have some significant changes to it so there might be impacts depending on exactly how it is used. It could always be flagged up as a bug in the ppp git repo.
Could the people affected by this issue try turning QoS off (if enabled) and see if that remedies the download issues?
@bonnietwin QoS seems to be the limiting factor for me. When I run a speed test with the preset rules in QoS, I am CPU-bottlenecked by the kernel thread ksoftirqd (interrupts). However, with QoS off, ksoftirqd doesn't consume much CPU. I also think this might just be affecting download speeds, as upload tests do not seem affected at all, with CPU staying around 1-2%.
I have also tested removing all QoS rules other than the ACK classes for both download and upload. This also reduced the amount of CPU used, but not as much as turning QoS off entirely.
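For anyone who wants to confirm the same bottleneck without eyeballing top, the per-CPU softirq counters show exactly where the NET_RX work lands. This is plain Linux, nothing IPFire-specific:

```shell
# Take two snapshots of the NET_RX/NET_TX softirq counters one second
# apart; the CPU column that grows fastest is the core doing all the
# receive processing (the one ksoftirqd pegs when QoS is on).
grep -E 'NET_RX|NET_TX' /proc/softirqs
sleep 1
grep -E 'NET_RX|NET_TX' /proc/softirqs
```

If only one column's NET_RX count moves noticeably between the two snapshots, all receive processing (including QoS shaping) is being handled on a single core, which matches the single-threaded bottleneck described above.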