After Changing ISPs, Red0 Doesn't See Full Bandwidth

Hey everyone,

Here’s the scenario.

1 Gbps download (RX) with 40 Mbps upload (TX)

I relocated and took my IPFire installation with me, as it was. It was super fast at my old location on bi-directional 1 Gbps fiber; where I am now, 1 Gbps download is what I am paying for. In the 3 months since I've been here, I have had 9 ISP techs onsite, and I am still seeing a slight shortage of incoming bandwidth. BUT, at least it is floating around 800 Mbps at the wall plate and modem, so that seems to be good to go. However, I was still getting a crap 250 Mbps download inside on my hardwired red0 interface.

I have been troubleshooting like a madman, because it is interfering with a lot I have going on. With that said, I installed the speedtest CLI tester on IPFire with Pakfire and ran it: still 250 Mbps on the high side on download. I then put a 1 Gbps unmanaged switch between the ISP modem and the IPFire so I had an inline tap, and kept testing with the same results.
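For anyone following along, installing the tester was roughly this (assuming the add-on is named speedtest-cli; check pakfire list to confirm):

[root@ipfire ~]# pakfire update
[root@ipfire ~]# pakfire install speedtest-cli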

Finally I said, screw it, pulled out the red0 ingress cable (Cat-8), plugged my computer directly into the switch connected to the modem, then reset the modem. Lo and behold, I am getting 800-900 Mbps on the download consistently.

Keep in mind, I have had a solid IPFire instance for a while now, just going through the upgrades. I move, and then all heck breaks loose: I can't get the full bandwidth capabilities on red0 for some reason. I have yet to put my finger on what is causing it.

I read some articles and temporarily disabled offloading on my NIC - didn't work.
I read more and completely shut off QoS because I saw complaints it was chopping folks' bandwidth - that also didn't work.
I also tried disabling auto-negotiation - also didn't work.
I also tried disabling TCP offloading on the network card interfaces (red, green, blue) - nope, that didn't work either. (The commands I used were along the lines of the sketch below.)
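For reference, the temporary offload toggles looked something like this (a sketch of the usual ethtool commands; the exact set of flags I used may have differed):

[root@ipfire ~]# ethtool -K red0 tso off gso off gro off     # segmentation/receive offloads
[root@ipfire ~]# ethtool -K red0 rx off tx off sg off        # checksum offloads and scatter-gather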

I didn't see the point of writing the settings back to /etc/sysconfig/rc.local without being able to determine that there was an actual solution.

This is a 4-port, full-duplex-capable network card, and it was just darn fast at my last location, but now… uggg…

Can you help me determine what the issue is please?

[root@ipfire /]# ethtool -k red0 > ~/red0settings
[root@ipfire ~]# cat red0settings

Features for red0:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: on
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: on
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]

[root@ipfire /]# ethtool red0
Settings for red0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: on (auto)
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

[root@ipfire ~]# speedtest
Retrieving speedtest.net configuration…
Testing from Spectrum (My WAN IP Placeholder)…
Retrieving speedtest.net server list…
Selecting best server based on ping…
Hosted by ISP1 (Location 1) [92.06 km]: 31.701 ms
Testing download speed…
Download: 205.47 Mbit/s
Testing upload speed…
Upload: 37.59 Mbit/s
[root@ipfire ~]# speedtest
Retrieving speedtest.net configuration…
Testing from Spectrum (My WAN IP Placeholder)…
Retrieving speedtest.net server list…
Selecting best server based on ping…
Hosted by ISP2 (Location 2) [75.77 km]: 17.097 ms
Testing download speed…
Download: 228.72 Mbit/s
Testing upload speed…
Upload: 41.58 Mbit/s

In addition, I SSH'd in, ran setup, changed red0 to a bogus static IP with a bogus gateway, saved the settings, then went back in, put it back to DHCP, removed the bogus information, saved again, and rebooted… Also… didn't work, LOL.

Can anyone point me in the right direction here? It can't be this hard, lol…
Also, as an FYI, I rebooted the firewall and it cleared out my temporary network card settings, so I could get another fresh run at this.

Lemme know!

What type of NIC is this? Does it have enough PCIe bandwidth? One NIC needs at least one 2.5 GT/s PCIe lane to get full speed, so for a quad NIC you need at least a 4x PCIe slot.

I have also seen some CPUs / systems-on-chip (e.g. the Intel Celeron 3150) that could not reach full speed on their PCIe lanes.
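You can check the negotiated PCIe link width and speed for the card with lspci, for example (the PCI address 01:00.0 is just a placeholder; find yours with the first command):

[root@ipfire ~]# lspci | grep -i ethernet
[root@ipfire ~]# lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'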

Hello Arne,

It is as below. Like I said though, this isn't new. This card has been doing its job well in IPFire, but now that I've moved ISPs it's not getting beyond a 300 Mbps connection, and I have gigabit download speeds from the ISP. I assume there is a setting which may not be compatible, but I can't put my finger on what it is. Also, there have been at least 2 IPFire upgrades since the move.

Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
4x 1 Gbps ports.

Also, I downloaded the new firmware for Linux for this device. I put the tarball on IPFire but can't compile it, as I can't find the development kernel for the IPFire build.

We have no development kernel. The builder uses the same kernel as the normal machines.

IPFire doesn't ship compilers and development headers with the final distribution. You need to download the source and compile it in a build chroot.
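Roughly, setting up the build environment looks like this (a sketch from memory; see the IPFire build documentation for the authoritative steps, and treat the repository URL as an assumption):

[user@buildhost ~]$ git clone https://git.ipfire.org/ipfire-2.x.git
[user@buildhost ~]$ cd ipfire-2.x
[user@buildhost ipfire-2.x]$ ./make.sh gettoolchain   # fetch the prebuilt toolchain
[user@buildhost ipfire-2.x]$ ./make.sh downloadsrc    # fetch the source tarballs
[user@buildhost ipfire-2.x]$ ./make.sh build          # compile everything inside the build chroot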

Isn't the Intel I350 firmware flashed into the card, or loaded via the mainboard BIOS?
I have not seen firmware for these chips in linux-firmware.

This could simply be a peering problem between your ISP and whoever is hosting the speedtest servers. There is simply a bottleneck somewhere in the backbone, and therefore the speedtest is limited to that speed. The variance between your two tests suggests that.
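One way to rule that out would be to force the test against several different servers; the speedtest CLI can list and select them (1234 is a placeholder server ID):

[root@ipfire ~]# speedtest --list | head
[root@ipfire ~]# speedtest --server 1234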

I thought exactly the same thing, except that my bandwidth is a steady 850 Mbps across the ISP backbone when I am directly connected to the modem with my testing laptop.

When the IPFire red0 WAN-facing interface is introduced, my connection drops to 250-ish Mbps, which before switching to this ISP was a solid 950 Mbps on the same exact hardware.

Hello Arne,

This is the newest of the Linux drivers for the I350. It is software-driver driven.

This may sound silly, but it doesn't hurt to give it a try, as some carriers may have issues auto-negotiating between hardware.

Could you try changing the NIC's parameters and hard-setting the speed to 1000 full duplex instead of auto, then restarting all devices and testing?
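Something like this on the IPFire console (temporary until reboot; red0 assumed as the WAN-facing interface; note that 1000BASE-T formally requires auto-negotiation, so if the driver rejects autoneg off at gigabit, restricting the advertised modes is the alternative):

[root@ipfire ~]# ethtool -s red0 speed 1000 duplex full autoneg off
[root@ipfire ~]# ethtool -s red0 autoneg on advertise 0x020    # alternative: advertise only 1000baseT/Full
[root@ipfire ~]# ethtool red0 | grep -E 'Speed|Duplex|Auto-negotiation'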

Hello Jason,

It is already, or at least it appears to be. I'm not sure if you reviewed my network card settings output, or if I forgot to include it. In any event, I did try turning that off temporarily and tested, with no positive results. A reboot cleared the manual config change.

Eric

I have tried multiple speed test sites. If you will also note, when I remove IPFire from the equation, I get proper speeds.

Okay, so let’s try to get to the bottom of this.

Could you please send the output of ip -s link?

Personally, I think an update of IPFire hosed my network card. This was blazing fast before I switched ISPs. Of course, between then and now there have been something like 3 version updates. I was going to recompile the NIC drivers, but the fact that they don't have the tools built in to do that on the fly makes it a tad difficult to work with.

Here is the output.

[root@ipfire ~]# ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes    packets   errors  dropped  overrun  mcast
    59072921     113071    0       0        0        0
    TX: bytes    packets   errors  dropped  carrier  collsns
    59072921     113071    0       0        0        0
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:21:9b:2d:1f:b1 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets   errors  dropped  overrun  mcast
    0            0         0       0        0        0
    TX: bytes    packets   errors  dropped  carrier  collsns
    0            0         0       0        0        0
3: green0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 80:61:5f:03:33:b0 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets   errors  dropped  overrun  mcast
    181227082    670750    0       0        0        0
    TX: bytes    packets   errors  dropped  carrier  collsns
    302433061    674658    0       0        0        0
4: red0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 80:61:5f:03:33:b1 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets   errors  dropped  overrun  mcast
    26181642967  22822369  0       0        0        126406
    TX: bytes    packets   errors  dropped  carrier  collsns
    2059304139   10456723  0       0        0        0
5: blue0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 80:61:5f:03:33:b2 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets   errors  dropped  overrun  mcast
    1943304506   9924947   0       0        0        35
    TX: bytes    packets   errors  dropped  carrier  collsns
    25730808341  22258256  0       0        0        0
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 80:61:5f:03:33:b3 brd ff:ff:ff:ff:ff:ff
    RX: bytes    packets   errors  dropped  overrun  mcast
    0            0         0       0        0        0
    TX: bytes    packets   errors  dropped  carrier  collsns
    0            0         0       0        0        0
[root@ipfire ~]#

I do not think that there is any reason for that.

On the other hand, the output doesn’t show anything suspicious. MTU on red0 is 1500. There are no errors.

Can you test latency to your default gateway? Is there anything interesting there? Do you have packet loss when the link is saturated?
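For example, run a continuous ping to the default gateway in one SSH session while a speedtest saturates the link in a second one, then compare idle and loaded round-trip times and loss (the gateway address is a placeholder):

[root@ipfire ~]# ping xxx.xxx.64.1    # leave running; Ctrl-C prints the statistics
[root@ipfire ~]# speedtest            # in the second session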

The link never gets saturated. I can't get above 250 Mbps on a 1-gig link since whatever happened… happened.

Let me know what commands and output you want to see and I will send it.

Eric

Michael, here are the stats from the command line right on IPFire. I let it run for a bit before I stopped it.

— xxx.xxx.64.1 ping statistics —
287 packets transmitted, 287 received, 0% packet loss, time 286407ms
rtt min/avg/max/mdev = 2.714/18.373/134.830/23.905 ms

To add more, this is the graph showing a history of time to the gateway at my ISP.

And zoomed in to the week:

Here is what my LAN speed is directly connected to the gig switch on the firewall according to fast.com

Here is what my LAN speed is directly connected to the gig switch on the firewall according to www.speedtest.net, which is hitting a local testing server within like 50 miles of me.

I don't get why this connection is having problems. Like I said, at my old ISP in a different location, I was maxing out the red gig port on IPFire.