LAGs + bridge configuration via web UI


I’m building up a new hardware environment, because running 2 PCs (one for IPFire and one as a 10G switch) for my network is kind of stupid and I can easily combine them into one.

So I put all the NICs into one machine and need to configure 2 LAGs and some ports, with the LAGs in bridged mode for green:

Green NICs:

4x 10G: 2x (LAG1) to a server; 1x uplink to a 10G switch; 1x reserve
4x 1G: 4x (LAG2) to a 1G Switch

So LAG1, LAG2 and the 1x 10G uplink need to be in bridged mode for green. Will that work, or will it conflict with the web UI implementation?
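For reference, outside the web UI this layout could be built by hand with iproute2. This is only a sketch under assumptions: the interface names (eth0, eth1, eth4, eth5–eth8) and the bridge name br-green are made up, and IPFire's web UI will not manage (and may not preserve) interfaces created this way.

```shell
# LAG1: 802.3ad (LACP) bond over the two server-facing 10G ports
ip link add bond1 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond1
ip link set eth1 down && ip link set eth1 master bond1

# LAG2: LACP bond over the four 1G ports to the 1G switch
ip link add bond2 type bond mode 802.3ad
for nic in eth5 eth6 eth7 eth8; do
    ip link set "$nic" down
    ip link set "$nic" master bond2
done

# GREEN bridge containing both LAGs and the 10G uplink (eth4)
ip link add br-green type bridge
ip link set bond1 master br-green
ip link set bond2 master br-green
ip link set eth4 master br-green
ip link set bond1 up && ip link set bond2 up && ip link set eth4 up
ip link set br-green up
```

The switch and server on the other end of each LAG would need matching LACP configuration for the 802.3ad bonds to come up.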

I’m not sure whether the LAGs will show up there, or whether the system will erase any custom network script as it did with some update (@arne_f’s script to bridge NICs).


So what is a LAG here? You seem to use the term for two different things.

LAG == link aggregation group

I want to use LACP.


I think it is a bad idea to use the software bridge to replace a switch (even with 1G this can be a bottleneck). It creates a lot of PCIe and CPU load. A NIC in a bridge runs in promiscuous mode (every packet is handed over to the kernel, not only broadcasts and the ones for the assigned address).

The mentioned script was implemented before the web GUI VLAN and bridge functions were added and should not be used anymore.

I have not used LAGs yet. Also, I’m not sure if iptables can handle more than one 10G link.

I have run that config before, with less powerful PCs and Endian; Endian also runs on a Linux kernel. Now I just wanted to bring it over to IPFire. But I understand and share your concerns. With the Intel driver I can define the LAG/team as “Adaptive Load Balancing”, which doesn’t need a specific switch behind it to work, but I tried to understand how it is supposed to work in both directions and couldn’t find an answer yet, since the descriptions only cover the client’s outgoing side, not the incoming side:

Maybe something can be configured with the driver as well, and this is a better option?
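For what it's worth, the Linux bonding driver has an equivalent mode, balance-alb, which does answer the incoming-side question: receive load balancing is done via ARP negotiation, i.e. the bond replies to different peers' ARP requests with different slaves' MAC addresses, so different peers deliver their traffic over different links. A minimal sketch, with assumed interface names:

```shell
# Sketch: adaptive load balancing bond -- no LACP support needed
# on the switch. Outgoing traffic is balanced by the driver;
# incoming traffic is balanced by ARP negotiation (peers learn
# different slave MACs). eth0/eth1 are placeholder names.
ip link add bond0 type bond mode balance-alb
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```

Because the balancing is per peer (per MAC), a single peer still only ever uses one link at a time for its incoming traffic.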


So you can use the kernel’s bonding support for LACP. Note that you won’t be able to combine throughput for a single connection.
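To illustrate the throughput point: 802.3ad hashes each flow onto one slave, so a single TCP stream is capped at one link's speed; only multiple flows spread across the links. A hedged sketch with common tuning knobs (interface names are assumptions, and this bypasses IPFire's web UI):

```shell
# Sketch: kernel 802.3ad (LACP) bond. xmit_hash_policy layer3+4
# spreads *flows* (by IP + port) across slaves -- one flow still
# maxes out at one link's speed. miimon 100 = link check every
# 100 ms; lacp_rate fast = LACPDUs every second.
ip link add bond0 type bond mode 802.3ad miimon 100 \
    lacp_rate fast xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Verify LACP negotiation and per-slave state:
cat /proc/net/bonding/bond0
```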

Is it a good idea to switch 40G in software? No. Arne is right.

But I suppose you know what you are doing.


Yeah, I know that, and I bought myself a 10G switch, so the firewall will have just one 10G uplink to GREEN (the switch). That way I don’t have a bottleneck from GREEN to RED + BLUE in case I copy files from BLUE (Wi-Fi, max ~800 MBit/s in practice) while downloading from RED (400 MBit/s). This will last for the next few years.

I did that before with Endian and didn’t have problems with a small Ryzen 240G CPU, so the Ryzen 1600 will probably laugh about it. But yeah, it’s not best practice, and the switch was about time.