I want to turn a refurb Dell server into a DIY firewall. Is it realistic to expect anything even close to 10G port-to-port throughput when transferring big zip files through IPFire (assuming the other end is capable of delivering that)? What sort of hardware spec should I aim for? Any special 10G NICs?
@geekprophet, welcome to our community.
It depends on the HW you want to "recycle".
What sort of NICs are built in and how are they connected to the system?
About 10G port-to-port throughput (I think "port-to-port" means server ↔ client): you must consider that transfer speed isn't determined by the capabilities of the endpoints alone; the whole transmission path has to be considered.
I have not purchased the refurb server yet. I can obtain whatever is required in terms of power. I'm thinking an R640. I would add whatever NICs are required to it.
"Port-to-port" speed is a term I borrowed from the OPNsense sales literature.
@geekprophet sorry for asking… in your setup, do you have access to cheap power?
According to the specs, the PSU range for the R640 goes from 500W to 1600W. So… using that powerhouse for "simply" a firewall, well… needs some consideration.
IPFire's CPU consumption depends a lot on the services enabled, so for "simple" network switching with rules, a 2nd-generation Xeon Scalable CPU might be a little overkill.
Last but not least: I'm assuming you're considering "inner" network transfers for the task, between green/green and/or green/blue…
Google something like "pcie speed" and you'll get references like PCI Express - Wikipedia, which has a comparison table of PCIe versions and maximum speeds in various configurations. You'll then need to determine which PCIe version the Dell's slots are and how wide they are (x1, x4, etc.), and remember that for throughput you'll need 2 NICs.
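If you'd rather not dig through that table every time, here's a minimal sketch of the lookup in Python. The per-lane figures are the commonly quoted approximations after encoding overhead (8b/10b or 128b/130b), so treat them as ballpark numbers, not datasheet values:

```python
# Effective per-lane PCIe bandwidth in Gbit/s, per direction,
# after line-encoding overhead. Common approximations only.
PCIE_PER_LANE_GBIT = {
    "1.0": 2.0,    # 2.5 GT/s with 8b/10b encoding
    "2.0": 4.0,    # 5 GT/s with 8b/10b encoding
    "3.0": 7.88,   # 8 GT/s with 128b/130b encoding
    "4.0": 15.75,  # 16 GT/s with 128b/130b encoding
}

def slot_bandwidth_gbit(version: str, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe slot, per direction."""
    return PCIE_PER_LANE_GBIT[version] * lanes

if __name__ == "__main__":
    for version, lanes in [("2.0", 4), ("3.0", 4), ("3.0", 8)]:
        print(f"PCIe {version} x{lanes}: ~{slot_bandwidth_gbit(version, lanes):.1f} Gbit/s")
```

For example, a PCIe 3.0 x4 slot works out to roughly 31.5 Gbit/s per direction, which is why it comes up so often as the minimum sensible home for a 10G NIC.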
Keep in mind that the slot used for a NIC must have at least 2.5x the NIC's speed in PCIe bandwidth, because there is some protocol overhead and the NIC is bidirectional while the quoted PCIe figures are not. Also, the PCIe root of the CPU must handle around 5x, because it has to serve two ports at the same time, so it can also be the bottleneck. Of course you also need plenty of CPU power for packet handling.
So for 10G you need 25G of PCIe bandwidth per slot and 50G at the CPU.
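Taking that rule of thumb at face value (the 2.5x and 5x multipliers are my rule of thumb, not an official spec), a quick sanity check looks like this:

```python
# Rule of thumb from above: a slot needs ~2.5x the NIC's line rate,
# and the CPU's PCIe root ~5x when it serves two such ports.
SLOT_FACTOR = 2.5
ROOT_FACTOR = 5.0

def required_bandwidth_gbit(nic_speed_gbit: float) -> tuple[float, float]:
    """Return (per-slot, CPU-root) PCIe bandwidth needed for one NIC speed."""
    return nic_speed_gbit * SLOT_FACTOR, nic_speed_gbit * ROOT_FACTOR

slot_need, root_need = required_bandwidth_gbit(10)
print(f"10G NIC: ~{slot_need:.0f} Gbit/s per slot, ~{root_need:.0f} Gbit/s at the CPU root")
# -> 10G NIC: ~25 Gbit/s per slot, ~50 Gbit/s at the CPU root
```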
50G on the CPU? That can't be CPU speed, but lane bandwidth? Where do you see that figure? I would assume that if the PCIe can handle the bandwidth, then the CPU can.
Yes, the speed that the CPU can handle via PCIe. I don't know where you find this in datasheets, but I know that, for example, an Intel N3160 cannot handle the 5 Gbit needed for two 1G LAN ports.
I mentioned the R640 because we already have a bunch of them, and they are fast and reliable workhorses. Ours have 25G NICs, and iperf shows 22G throughput from server to server. I'm assuming they would work well for a firewall, and I can get a refurb one for cheap. However, if they are overkill, we could also go with something less powerful and cheaper. The other part of my question is about IPFire itself. I'm wondering whether it can be expected to provide 10G throughput (ingress on one interface, egress on the other) on a server like that?
-Eric
Regarding "inner" transfers (green/green or green/blue): no, I'm planning to use IPFire at the edge, so its outside interface would be Internet-facing and its inside interface would connect to a DMZ VLAN. Is this a bad idea for some reason?
-Eric
If it works well with 1 x 25G NIC, then I would assume it would work well with 2 x 10G NICs. It would be fine, but probably overkill for a firewall. Even if the PSUs are rated between 500W and 1600W, that does not mean the system will be drawing that much, especially if it only has 1 SSD or HDD, and you won't need much memory either. The CPU is well OTT.
TBH, with something like that you could get quite cute and run an OS like Proxmox. Then give 2-4 cores to a VM running IPFire and use the rest of the power for something else.
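If you went that route, creating the IPFire guest could even be scripted against the Proxmox API, e.g. via the proxmoxer library. This is only a rough sketch: the hostname, node name, credentials, VM ID, and bridge names below are all placeholders, so check the parameters against your Proxmox version before relying on it:

```python
# Rough sketch, assuming the proxmoxer library and a reachable Proxmox node.
# pve.example.com, node "pve", bridges vmbr0/vmbr1, and VM ID 100 are all
# placeholders for illustration only.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

# 2-4 cores should be plenty for the firewall VM, per the suggestion above.
proxmox.nodes("pve").qemu.create(
    vmid=100,
    name="ipfire",
    cores=4,
    memory=4096,                 # MiB
    net0="virtio,bridge=vmbr0",  # red / WAN side
    net1="virtio,bridge=vmbr1",  # green / LAN side
)
```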
In the beginning I was toying with the idea of using my Lenovo P920… but quickly realized it was overkill.
How is it going?
It's not in the datasheet, but the datasheet does reveal it's more of a small SoC, since it only has 4 PCIe 2.0 lanes; to build a motherboard, your choices are certain slot + on-board configurations (1x4 / 2x2 / 1x2 + 2x1 / 4x1), and you have to mux or share PCIe lanes. I could build a motherboard with it and a couple of 1G ports, but I would have to share the on-board SATA and USB ports' PCIe lanes to accomplish this.
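To put rough numbers on why the N3160 runs out of headroom, reusing the ~4 Gbit/s-per-PCIe-2.0-lane approximation and the 2.5x rule from earlier in the thread (both ballpark figures, not measurements):

```python
# Rough arithmetic for the N3160 case: 4 PCIe 2.0 lanes total,
# ~4 Gbit/s effective each after 8b/10b encoding.
PCIE2_LANE_GBIT = 4.0

dual_1g_need = 2 * 1 * 2.5           # two 1G ports under the 2.5x rule -> 5 Gbit/s
x1_slot = 1 * PCIE2_LANE_GBIT        # NIC muxed down to one lane -> 4 Gbit/s, short
x2_slot = 2 * PCIE2_LANE_GBIT        # x2 works (8 Gbit/s), but eats half the lanes
                                     # that SATA/USB would also need

print(f"need {dual_1g_need} Gbit/s; x1 gives {x1_slot}, x2 gives {x2_slot}")
```

So a dual-port 1G NIC squeezed onto a single lane falls short, and giving it two lanes means sharing with the on-board SATA/USB, which is exactly the muxing problem above.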