Download speed limitation on APU

Recently I switched my internet provider from a cable modem connection to FTTB (fiber to the building) with a GPON PPPoE connection over a VLAN. The switchover with IPFire worked seamlessly thanks to the great documentation.

However, I am now experiencing a download speed limit of about 150 MBit/s, although my contract specifies 200 MBit/s as the guaranteed speed.
Based on this and many other similar threads about the CPU performance requirements of PPPoE, I had almost given up on reaching the full download speed with my good old APU4D4 hardware and had started looking for more powerful hardware.

Yesterday I found this interesting link, PPPoE on fiber with the Linux machine as the router, which sketches a way to lower the CPU demand of PPPoE.

I was wondering whether this could be applied to IPFire, too.

What is your opinion on that?


Disable QoS and enable intrusion detection only for red. My APU2C4 reaches nearly the full 1000 MBit/s downstream. Also check whether all NICs on the APU and on the clients are set to gigabit full duplex.
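
A quick way to verify the negotiated link settings from the IPFire console is ethtool (assuming it is installed; adjust the interface names to your setup):

ethtool red0 | grep -E 'Speed|Duplex'
ethtool green0 | grep -E 'Speed|Duplex'

Both should report Speed: 1000Mb/s and Duplex: Full.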


Thanks. My downstream rate is low even with QoS off and intrusion prevention off. All relevant Ethernet NICs are at 1000 MBit/s and full duplex.

Do you use the PPPoE protocol for your fiber connection?

Never had any problems with VDSL or cable. Shouldn't be a problem for fibre either.

Agreed, for cable I also had no problem. The issue occurs specifically when PPPoE is involved. Just do an internet search for ‘PPPoE single core performance’ and you will find many reports about this…

My specific question relates to the use of a PPPoE VLAN connection in a GPON FTTB fiber network.

The link provided above addresses exactly such a scenario. My question is whether IPFire could benefit from that approach.
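
For reference, the VLAN sub-interface that carries the PPPoE session can be checked like this (just a sketch; the interface name and VLAN ID are the ones from my setup, see the logs further down):

ip -d link show red0.7

The detailed output should contain a line like 'vlan protocol 802.1Q id 7'.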

from the article:

I also noted that pppoe ran at 75% CPU during the speed test, which made me suspect that it’s the bottleneck. Spoiler: It indeed was.

Do you have high CPU usage?

Please post a screenshot of this page (the entire page, please):

https://ipfire.localdomain:444/cgi-bin/system.cgi


Here we go:

A) with IPS switched on:

You can see the effects of the speed test, run at the command line with

for i in `seq 1 300`; do speedtest-cli --no-upload --simple; done

on the right side.

B) with IPS switched off, QoS is off:

EDIT: Now I got iperf3 running as follows (reverse mode, i.e. download direction, with 10 parallel streams, 20-second report intervals and a total duration of 300 seconds):

iperf3 -c speedtest.wtnet.de -t 300 -R -P 10 -i 20

Here are the reported download speeds for a few of the 20-second measurement intervals:

- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[  7] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[  9] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[ 11] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[ 13] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[ 15] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[ 17] 180.01-200.03 sec  38.4 MBytes  16.1 Mbits/sec                  
[ 19] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[ 21] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[ 24] 180.01-200.03 sec  38.5 MBytes  16.1 Mbits/sec                  
[SUM] 180.01-200.03 sec   385 MBytes   161 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[  7] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[  9] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 11] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 13] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 15] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 17] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 19] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 21] 200.03-220.06 sec  41.8 MBytes  17.5 Mbits/sec                  
[ 24] 200.03-220.06 sec  41.5 MBytes  17.4 Mbits/sec                  
[SUM] 200.03-220.06 sec   417 MBytes   175 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[  7] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[  9] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[ 11] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[ 13] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[ 15] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[ 17] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[ 19] 220.06-240.04 sec  40.5 MBytes  17.0 Mbits/sec                  
[ 21] 220.06-240.04 sec  40.4 MBytes  16.9 Mbits/sec                  
[ 24] 220.06-240.04 sec  40.2 MBytes  16.9 Mbits/sec                  
[SUM] 220.06-240.04 sec   405 MBytes   170 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[  7] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[  9] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 11] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 13] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 15] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 17] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 19] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 21] 240.04-260.04 sec  39.6 MBytes  16.6 Mbits/sec                  
[ 24] 240.04-260.04 sec  39.4 MBytes  16.5 Mbits/sec                  
[SUM] 240.04-260.04 sec   396 MBytes   166 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[  7] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[  9] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 11] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 13] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 15] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 17] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 19] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 21] 260.04-280.05 sec  41.5 MBytes  17.4 Mbits/sec                  
[ 24] 260.04-280.05 sec  41.4 MBytes  17.3 Mbits/sec                  
[SUM] 260.04-280.05 sec   415 MBytes   174 Mbits/sec                  
^C- - - - - - - - - - - - - - - - - - - - - - - - -
[  5] 280.05-284.19 sec  9.00 MBytes  18.2 Mbits/sec                  
[  7] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[  9] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 11] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 13] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 15] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 17] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 19] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 21] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[ 24] 280.05-284.19 sec  8.88 MBytes  18.0 Mbits/sec                  
[SUM] 280.05-284.19 sec  88.9 MBytes   180 Mbits/sec 

Looks better than the standard speed tests :grinning: I am almost achieving the minimum guaranteed speed of 180 MBit/s.

Here are the CPU load graphs:

Sometimes there is a 100% bar on the right side for a short moment. This could be an averaging issue.

Here is the output of htop running on the APU4D4 during iperf3 downloads, without IPS and without QoS:

As described in many other forum contributions, the PPPoE download loads only a single CPU core, to almost 100%.
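
For anyone who wants to confirm this without htop, here is a minimal sketch that samples the per-core counters in /proc/stat twice during a download and prints the busy and softirq share per core (plain shell and awk, which should already be available):

grep '^cpu[0-9]' /proc/stat > /tmp/cpu1
sleep 5
grep '^cpu[0-9]' /proc/stat > /tmp/cpu2
# counters per sample: user nice system idle iowait irq softirq steal ...
paste /tmp/cpu1 /tmp/cpu2 | awk '{
    busy  = ($13+$14+$15+$18+$19+$20) - ($2+$3+$4+$7+$8+$9)
    total = 0; for (i = 13; i <= 20; i++) total += $i; for (i = 2; i <= 9; i++) total -= $i
    printf "%s: %3.0f%% busy (%3.0f%% softirq)\n", $1, 100*busy/total, 100*($19-$8)/total
}'

A saturated PPPoE link typically shows up as one core close to 100% busy, largely in softirq, while the other cores stay mostly idle.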

Here is the result from ‘breitbandmessung.de’:

If I switch IPS on, the iperf3 download speed drops to about 100 MBit/s.

Let me repeat my initial question:

Yesterday I found this interesting link, PPPoE on fiber with the Linux machine as the router, which sketches a way to lower the CPU demand of PPPoE.


Unfortunately I can’t answer your question, as it is far beyond my ability to even research the issue. I can only offer one piece of advice: if you distill the article down to its essential point, formulated as a simple and direct question, it is possible that one of the developers might tell you whether PPPoE can run in kernel space and, if so, how to set it up.

Alternatively, you could open a bug report asking for this feature. In that case, you should not expect anyone to have the time to follow the link and parse through the article; again, you should clearly and concisely summarize your request and offer the link as a reference.

EDIT: By the way, encumbering the kernel with a user-space plugin would mean trading the security of your firewall for speed; personally, I would never do that.

Wow! Thank you for the graphs and details.

I see you have an APU4D4, which is what I have. And you have much more CPU usage than I ever see, though I am using a DOCSIS 3 cable modem.

Since this is beyond my skill level, I agree with @cfusco - please open a bug report. This will help make sure the development team reviews this information.

Information on how to add a bug report in IPFire Bugzilla:

In your report, you should reference your post above (the one with graphs and info).


Thanks for your feedback. I understand that the IPFire developers are very busy with other tasks and will see my proposal ‘just’ as a hint for a possible improvement. Anyway, I will raise a Bugzilla entry as you suggested, although as an improvement suggestion rather than a bug.

My motivation is not to throw away my APU4D4, which is doing an excellent job at low power consumption. Saving electrical energy and reducing CO2 emissions is another motivation. Therefore, I am trying to minimize the electrical power required for my internet hardware.

Other firewall software, e.g. pfSense, OPNsense, OpenWrt, …, has the same performance issue with PPPoE connections.

As a result, many people with PPPoE-based fiber internet connections seem to purchase more powerful hardware with a higher CPU frequency. This creates unnecessary electronic waste and additional CO2 emissions.

IPFire is using the pppoe.so plugin module:

root      2900  0.0  0.1   9480  6036 ?        S    07:01   0:02 /usr/sbin/pppd plugin pppoe.so red0.7 usepeerdns defaultroute
 ....

I need to dig a bit deeper to understand how a PPPoE connection is realized by IPFire …
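
As a first check (just a sketch of what I plan to look at), the kernel-mode PPPoE path can be recognised like this:

lsmod | grep -i pppo            # pppoe / pppox kernel modules loaded?
cat /proc/net/pppoe             # active kernel PPPoE sessions (Id, Address, Device)
ps ax | grep '[p]ppd'           # "plugin pppoe.so" on the command line means the kernel plugin is in use

If pppd were instead started with a pty helper (user-space PPPoE), every packet would have to cross the kernel/user-space boundary additional times, which is exactly the overhead the article is about.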


Today I’ve analyzed the PPP dial-in messages of IPFire core 182 on my APU4D4 (anonymized) in a bit more detail:

Feb  5 07:01:11 IPFIRE pppd[2901]: Plugin pppoe.so loaded.
Feb  5 07:01:11 IPFIRE connectd[2902]: Connectd (start) started with PID 2902
Feb  5 07:01:11 IPFIRE kernel: PPP generic driver version 2.4.2
Feb  5 07:01:11 IPFIRE pppd[2901]: PPPoE plugin from pppd 2.5.0
Feb  5 07:01:11 IPFIRE pppd[2901]: pppd 2.5.0 started by root, uid 0
Feb  5 07:01:11 IPFIRE kernel: NET: Registered PF_PPPOX protocol family
Feb  5 07:01:11 IPFIRE pppd[2901]: Send PPPOE Discovery V1T1 PADI session 0x0 length 12
Feb  5 07:01:11 IPFIRE pppd[2901]:  dst ff:ff:ff:ff:ff:ff  src aa:aa:aa:aa:aa:aa
Feb  5 07:01:11 IPFIRE pppd[2901]:  [service-name] [host-uniq 55 0b 00 00]
Feb  5 07:01:13 IPFIRE kernel: igb 0000:02:00.0 orange0: igb: orange0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Feb  5 07:01:13 IPFIRE kernel: igb 0000:01:00.0 red0: igb: red0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Feb  5 07:01:16 IPFIRE pppd[2901]: Send PPPOE Discovery V1T1 PADI session 0x0 length 12
Feb  5 07:01:16 IPFIRE pppd[2901]:  dst ff:ff:ff:ff:ff:ff  src aa:aa:aa:aa:aa:aa
Feb  5 07:01:16 IPFIRE pppd[2901]:  [service-name] [host-uniq 55 0b 00 00]
Feb  5 07:01:16 IPFIRE pppd[2901]: Recv PPPOE Discovery V1T1 PADO session 0x0 length 46
Feb  5 07:01:16 IPFIRE pppd[2901]:  dst aa:aa:aa:aa:aa:aa  src bb:bb:bb:bb:bb:bb
Feb  5 07:01:16 IPFIRE pppd[2901]:  [AC-name dsdf1-bng4] [host-uniq 55 0b 00 00] [service-name] [AC-cookie dd dd dd dd dd dd dd dd dd dd dd dd dd dd dd]
Feb  5 07:01:16 IPFIRE pppd[2901]: Send PPPOE Discovery V1T1 PADR session 0x0 length 32
Feb  5 07:01:16 IPFIRE pppd[2901]:  dst bb:bb:bb:bb:bb:bb  src aa:aa:aa:aa:aa:aa
Feb  5 07:01:16 IPFIRE pppd[2901]:  [service-name] [host-uniq 55 0b 00 00] [AC-cookie dd dd dd dd dd dd dd dd dd dd dd dd dd dd dd]
Feb  5 07:01:16 IPFIRE pppd[2901]: Recv PPPOE Discovery V1T1 PADO session 0x0 length 46
Feb  5 07:01:16 IPFIRE pppd[2901]:  dst aa:aa:aa:aa:aa:aa  src cc:cc:cc:cc:cc:cc
Feb  5 07:01:16 IPFIRE pppd[2901]:  [AC-name frnk1-bng4] [host-uniq 55 0b 00 00] [service-name] [AC-cookie dd dd dd dd dd dd dd dd dd dd dd dd dd dd dd]
Feb  5 07:01:16 IPFIRE pppd[2901]: Recv PPPOE Discovery V1T1 PADS session 0x15c length 46
Feb  5 07:01:16 IPFIRE pppd[2901]:  dst aa:aa:aa:aa:aa:aa  src bb:bb:bb:bb:bb:bb
Feb  5 07:01:16 IPFIRE pppd[2901]:  [service-name] [host-uniq 55 0b 00 00] [AC-name dsdf1-bng4] [AC-cookie dd dd dd dd dd dd dd dd dd dd dd dd dd dd dd]
Feb  5 07:01:16 IPFIRE pppd[2901]: PPP session is 348
Feb  5 07:01:16 IPFIRE pppd[2901]: Connected to XX:XX:XX:XX:XX:XX via interface red0.7
Feb  5 07:01:16 IPFIRE pppd[2901]: using channel 1
Feb  5 07:01:16 IPFIRE pppd[2901]: Using interface ppp0
Feb  5 07:01:16 IPFIRE pppd[2901]: Connect: ppp0 <--> red0.7
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [LCP ConfReq id=0x1 <mru 1492> <magic 0x199ad417>]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [LCP ConfReq id=0xfe <mru 1492> <auth pap> <magic 0x300dc78e>]
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [LCP ConfAck id=0xfe <mru 1492> <auth pap> <magic 0x300dc78e>]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [LCP ConfAck id=0x1 <mru 1492> <magic 0x199ad417>]
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [LCP EchoReq id=0x0 magic=0x199ad417]
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [PAP AuthReq id=0x1 user="SSSSSSSSSSSSSSS" password=<hidden>]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [LCP EchoRep id=0x0 magic=0x300dc78e]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [PAP AuthAck id=0x1 "OK"]
Feb  5 07:01:16 IPFIRE pppd[2901]: Remote message: OK
Feb  5 07:01:16 IPFIRE pppd[2901]: PAP authentication succeeded
Feb  5 07:01:16 IPFIRE pppd[2901]: peer from calling number XX:XX:XX:XX:XX:XX authorized
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [IPCP ConfReq id=0x1 <addr 0.0.0.0> <ms-dns1 0.0.0.0> <ms-dns2 0.0.0.0>]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [IPCP ConfReq id=0xd6 <addr BBB.BBB.BBB.BBB>]
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [IPCP ConfAck id=0xd6 <addr BBB.BBB.BBB.BBB>]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [IPCP ConfRej id=0x1 <ms-dns2 0.0.0.0>]
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [IPCP ConfReq id=0x2 <addr 0.0.0.0> <ms-dns1 0.0.0.0>]
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [IPCP ConfNak id=0x2 <addr ZZZ.ZZZ.ZZZ.ZZZ> <ms-dns1 AAA.AAA.AAA.AAA>]
Feb  5 07:01:16 IPFIRE pppd[2901]: sent [IPCP ConfReq id=0x3 <addr ZZZ.ZZZ.ZZZ.ZZZ> <ms-dns1 AAA.AAA.AAA.AAA>]
Feb  5 07:01:16 IPFIRE root: Could not find a bridged zone for ppp0
Feb  5 07:01:16 IPFIRE pppd[2901]: rcvd [IPCP ConfAck id=0x3 <addr YYY.YYY.YYY.YYY> <ms-dns1 YYY.YYY.YYY.YYY>]
Feb  5 07:01:16 IPFIRE pppd[2901]: local  IP address YYY.YYY.YYY.YYY
Feb  5 07:01:16 IPFIRE pppd[2901]: remote IP address YYY.YY7.YYY.YYY
Feb  5 07:01:16 IPFIRE pppd[2901]: primary   DNS address YYY.YYY.YYY.YYY
Feb  5 07:01:16 IPFIRE pppd[2901]: Script /etc/ppp/ip-up started (pid 2957)
Feb  5 07:01:21 IPFIRE vnstatd[2538]: Interface "ppp0" enabled.
Feb  5 07:01:21 IPFIRE connectd[2902]: System is online. Exiting.

Obviously, IPFire core 182 is already using exactly the same PPPoE connection approach as described in the link I mentioned above… Therefore, I can answer my own question: IPFire is already using the optimal way of establishing a PPPoE connection :grinning: Great job by the IPFire development team.

The only difference seems to be the PPPoE plugin used: pppoe.so versus rp-pppoe.so. But I guess using the kernel plugin is more secure.
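
For comparison, as far as I understand the two setups, the pppd invocations would roughly look like this (only a sketch, option details are in the pppd and rp-pppoe man pages; red0.7 is simply my interface):

# kernel-mode PPPoE via the plugin shipped with pppd 2.5.0 (what IPFire core 182 uses):
pppd plugin pppoe.so red0.7 usepeerdns defaultroute ...

# user-space PPPoE via the rp-pppoe helper on a pty (the slow path the article warns about):
pppd pty "pppoe -I red0.7 -T 80" ...

So the low-overhead, in-kernel data path is already what IPFire sets up out of the box.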
