Just sharing this new feature I developed with eBPF XDP to monitor or block TLS/SSL encrypted website access from your network: high packet-processing efficiency, low overhead, totally transparent, no proxy required, no configuration required on client computers, and easy to use with a WebUI. https://youtu.be/1MTKcaHiLX4?si=S-UebJh-VbDooZB6
Thanks for sharing the YouTube video link.
Would you kindly consider writing down some info about how this feature works and how it is better/more featureful than other 'concurrent' solutions available?
I mean, you're not 'advertising' your fork here, huh?
Sure, I will briefly explain how it works in my fork, BPFire. I have a kernel with eBPF enabled, which allows me to attach an eBPF XDP program to the green0 interface. This XDP program can extract the server name (which is not encrypted) from each TLS/SSL ClientHello received by green0 from green-network clients. The XDP program can log the server name to the BPFire system log (/var/log/messages) for monitoring, and it can also block the TLS/SSL connection based on the extracted server name by looking it up in a pre-populated server-name blocklist map. It is efficient because the XDP program runs in the driver in the kernel; it bypasses and does not require the netfilter firewall chains, the TCP stack, sockets, or a proxy like Squid, SSLproxy, or Suricata IPS, etc. It is transparent because no proxy configuration is required on green-network clients.
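To make that concrete, here is a minimal userspace C sketch of the ClientHello walk described above. This is illustrative only: the function name and layout are mine, not taken from xdp_sni.bpf.c, which does the equivalent parsing inside the kernel with verifier-mandated bounds checks on every access.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical userspace sketch of the ClientHello walk.
 * Copies the SNI into `out` and returns 0 on success, -1 otherwise. */
static int extract_sni(const uint8_t *p, size_t len, char *out, size_t outlen)
{
    size_t off;

    /* TLS record header: type(1) version(2) length(2); 0x16 = handshake */
    if (len < 5 || p[0] != 0x16)
        return -1;
    off = 5;

    /* Handshake header: type(1) length(3); 0x01 = ClientHello */
    if (off + 4 > len || p[off] != 0x01)
        return -1;
    off += 4;

    off += 2 + 32;                        /* client_version + random */
    if (off + 1 > len) return -1;
    off += 1 + p[off];                    /* session_id */
    if (off + 2 > len) return -1;
    off += 2 + (((size_t)p[off] << 8) | p[off + 1]);  /* cipher_suites */
    if (off + 1 > len) return -1;
    off += 1 + p[off];                    /* compression_methods */
    if (off + 2 > len) return -1;

    /* extensions: total length, then (type, length, data) triples */
    size_t ext_end = off + 2 + (((size_t)p[off] << 8) | p[off + 1]);
    off += 2;
    if (ext_end > len) return -1;

    while (off + 4 <= ext_end) {
        uint16_t type = (uint16_t)((p[off] << 8) | p[off + 1]);
        uint16_t elen = (uint16_t)((p[off + 2] << 8) | p[off + 3]);
        off += 4;
        if (off + elen > ext_end) return -1;
        if (type == 0x0000) {             /* server_name extension */
            /* list_length(2) name_type(1) name_length(2) host_name */
            if (elen < 5) return -1;
            uint16_t nlen = (uint16_t)((p[off + 3] << 8) | p[off + 4]);
            if ((size_t)nlen + 5 > elen || (size_t)nlen + 1 > outlen)
                return -1;
            memcpy(out, p + off + 5, nlen);
            out[nlen] = '\0';
            return 0;
        }
        off += elen;
    }
    return -1;
}
```

In the kernel, the XDP program would do this walk on the bytes between `data` and `data_end`, then look the extracted name up in a BPF map to decide between XDP_DROP and XDP_PASS.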
It can't be all good, right? The XDP program can't extract the server name from the Chrome browser, because Chrome sends a ClientHello payload larger than 1500 bytes, which gets segmented at the MTU, and the server name falls into the second TCP segment. The XDP program can't do TCP stream reassembly, so it can't extract the server name. Suricata IPS can do TCP reassembly and has SNI filter rules, so it has no such problem.
Yes, I would like to plug my fork here in the community so the community knows there is something else out there based on IPFire. Through this interaction, maybe we can all learn from each other. There must be something done well in IPFire, since I chose to base my fork on IPFire :).
Do I understand right? The XDP program run by eBPF resolves the destination FQDN from the destination IP of a TLS/SSL packet.
The standard behaviour of the stateful inspection firewall of IPFire is 'allow connections initiated by a local device only'.
A connection (especially in the case of TLS) is built up in two steps:
1. query the IP for the destination FQDN
2. connect to this IP
This means connections to be blocked first present their FQDN to the IPFire system. The DNS resolution can easily be filtered by some 'tools'; the RPZ mechanism is one of them, and it can be installed without any kernel change.
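For illustration, RPZ does this filtering in the resolver; a minimal BIND-style response-policy zone entry (the zone and host names here are placeholders, not from any real blocklist) forces NXDOMAIN for a name and its subdomains:

```
; illustrative RPZ records: CNAME . rewrites the answer to NXDOMAIN
blocked-site.example    CNAME .
*.blocked-site.example  CNAME .
```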
That's a problem.
Just another layer.
The whole firewall engine would need to be redesigned.
Still need iptables.
Still need Suricata.
This probably works great on ISP routers to block stuff the government wants blocked.
The XDP program acts like a man in the middle, sniffing the raw packets received by the green0 interface from all computers in the green network. It does byte-by-byte decoding of the TLS/SSL ClientHello packet to extract the server name; see the code here: xdp-tools/xdp-sni/xdp_sni.bpf.c at master · vincentmli/xdp-tools · GitHub. So it does not involve DNS/FQDN resolution, it does not handle packets initiated from IPFire itself, and it is not a stateful firewall. XDP is designed for fast packet processing/filtering/DDoS mitigation. People could build a stateful firewall based on AF_XDP, which is kind of like AF_PACKET, but that is not the case here.
An XDP program attached to the green0 interface does not handle packets initiated from IPFire itself, because it can only handle packets received on the green0 interface (RX).
XDP is not designed to replace a stateful firewall; it is designed to do quick-and-dirty, low-hanging-fruit raw packet filtering. Imagine a DDoS scenario like a SYN flood: you don't want the SYN flood packets to go through the stateful iptables firewall, which is costly; the DDoS traffic can simply be dropped at the driver layer to save memory and CPU resources. See L4Drop: XDP DDoS Mitigations. Average home IPFire users do not need this power of course :), but it can be available if users prefer.
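As an illustration of that low-hanging-fruit filtering, here is the per-packet decision a minimal SYN-flood filter makes, written as plain userspace C. This is a sketch under my own assumptions: an actual XDP program receives a struct xdp_md from the driver hook and returns these verdicts, and a production mitigation like L4Drop rate-limits rather than dropping every bare SYN.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Verdict values mirror the kernel's enum xdp_action. */
enum { XDP_DROP = 1, XDP_PASS = 2 };

/* Hypothetical sketch of a SYN-flood filter's per-packet decision:
 * parse Ethernet -> IPv4 -> TCP and drop bare SYNs (SYN set, ACK clear)
 * before they ever reach the stateful firewall. */
static int syn_filter(const uint8_t *pkt, size_t len)
{
    /* Ethernet header: dst(6) src(6) ethertype(2) */
    if (len < 14 + 20)
        return XDP_PASS;
    if (((pkt[12] << 8) | pkt[13]) != 0x0800)  /* not IPv4 */
        return XDP_PASS;

    const uint8_t *ip = pkt + 14;
    size_t ihl = (size_t)(ip[0] & 0x0f) * 4;   /* IPv4 header length */
    if (ihl < 20 || ip[9] != 6)                /* ip[9] = protocol, 6 = TCP */
        return XDP_PASS;

    size_t tcp_off = 14 + ihl;
    if (tcp_off + 14 > len)                    /* need the TCP flags byte */
        return XDP_PASS;
    uint8_t flags = pkt[tcp_off + 13];
    if ((flags & 0x02) && !(flags & 0x10))     /* SYN without ACK */
        return XDP_DROP;                       /* drop at the driver layer */
    return XDP_PASS;
}
```

Because the verdict is computed from a handful of header bytes, no sk_buff, conntrack entry, or socket is ever allocated for the dropped packet, which is where the resource saving comes from.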
After some study of XDP, the new layer in the network stack, and eBPF, the runtime virtual machine for its tasks, I want to share some thoughts about this concept.
Placing a new layer between the driver and the rest of the network stack isn't a very new idea. It was realised by the SoftStacks component of OS/9 many years ago, for example. It allows a flexible way of implementing the networking part of an OS. But XDP doesn't add a real new layer; it just implements hooks in the driver, which must support this.
eBPF defines a virtual machine for the operations of these hooks. Verification is only done on operational aspects (whether the function will return in a short time). This implies two problems:
2.1 the halting problem is undecidable, so the check may produce false positives
2.2 for a hardened internet access device it is necessary to check the algorithm for misbehaviour
The XDP/eBPF concept is very interesting for developing new features. But a device like IPFire must not be flexible in its main part when running in production. The behaviour of the system, especially the network stack, must be predictable. The XDP hooks break this!
On the support side, it is very difficult to help users of a system with too much freedom. The main support in this project is done by users with limited time (time could be expanded, but this would mean moving towards paid support only).
My opinion about IPFire and possible extensions: IPFire should have a fixed OS (a kernel, in Linux categories). Changes should only be made by Core Updates (CUs). Extensions can be a good tool for the development process, on the way from one core update to the next.
One last sentence about this topic: advertising of new ideas is done on the development mailing list!
The main XDP design goal is high packet-rate processing, which might not be the main use case for IPFire's target audience. I am not trying to convince the IPFire devs to adopt XDP; my goal is to provide an alternative option for IPFire community users, because the XDP work is based on IPFire.
I don't really understand who actually has the use case of rejecting 100 GBit worth of TLS handshakes a second?
I think so far we have not seen anything that is actually going to be a real win for IPFire users. XDP currently comes with very simple BPF programs that are a lot less flexible and have far fewer features than, let us say, iptables. Therefore I think the 'performance advantages' are all very theoretical.
I am not talking about high performance for this TLS/SSL use case, just talking about XDP in general.
It all depends on IPFire's use cases. In general I agree with you: most IPFire users seem to be home-based or small-business users, so why bother with raw high packet-rate performance that they may never require.
XDP's 'performance advantages' are not all theoretical. I have been working on network performance for over a decade for Fortune 500 customers in my $day job; my experience, the kernel XDP developers, and companies like Facebook and Cloudflare disagree with you that XDP's 'performance advantages' are all just theoretical. If they were just theoretical, why would popular high-performance network vendors spend their developers' time supporting XDP?
IPFire is great because it built the foundation for me to extend with XDP, so I really appreciate that. Don't get me wrong: I am actually advocating IPFire and hope more users will choose IPFire. If 1 out of 1000 IPFire users think they have a special need that XDP might answer, then I can provide the option.
If your goal is an XDP-enhanced IPFire, you have to participate in the dev process. This means mainly revising core modifications, communicating on the dev list, managing the possible additions, suggesting ways to prevent 'door openers', ...
Users expect the same security as with an original IPFire system.
BTW: companies like Facebook, Cloudflare etc. may have other goals than just optimizing throughput.
If you mean the main use cases for XDP's high packet-rate processing, DDoS mitigation is one of them; basically, any high packet-rate processing case that requires no stateful tracking. And Suricata IDS/IPS, which IPFire uses, actually also has an XDP use case: 21.4. eBPF and XDP, Suricata 8.0.0-dev documentation. Oh, and here is what ChatGPT says, or maybe ChatGPT is biased and can't be trusted:
XDP (eXpress Data Path) is a high-performance, in-kernel network data path feature that is part of the Linux kernel. It enables users to process network packets directly in the network driver, bypassing much of the traditional networking stack. XDP is designed to handle network traffic with minimal overhead, providing a way to perform custom packet processing at extremely high rates.
Here's an in-depth explanation of why XDP is high-performance and what use cases it is particularly good for.
Why is XDP High Performance?
Runs in the NIC Driver (Early Packet Processing):
XDP allows you to process network packets as soon as they are received by the network interface card (NIC), before they traverse the Linux networking stack (e.g., IP, TCP/UDP layers). This means the packet can be handled with minimal CPU instructions and at a much earlier stage than traditional processing.
The packet is processed in the context of the device driver and often before allocating resources like memory buffers (skbuff). Avoiding these allocations saves valuable CPU cycles.
Bypasses the Networking Stack:
The traditional Linux networking stack has multiple layers (Ethernet, IP, TCP/UDP, socket handling, etc.), each adding overhead in terms of packet processing. XDP skips these layers, allowing the user to directly act on the packet without passing through them, thereby reducing latency and increasing throughput.
Avoids Memory Allocations:
XDP processes packets before they are allocated into the kernel's memory buffer (sk_buff), which is used by higher layers of the networking stack. This avoidance of memory allocation and deallocation significantly reduces overhead and boosts performance, especially under high packet load.
Runs in Kernel Space:
XDP programs run in the kernel space via BPF (Berkeley Packet Filter), which avoids the context switch to user space that occurs in traditional packet processing. Context switching between kernel and user space is expensive and adds latency. Since XDP operates entirely in kernel space, this overhead is eliminated.
Programmability and Flexibility:
XDP leverages eBPF (extended Berkeley Packet Filter), a highly efficient and programmable bytecode system. This means that custom packet-processing logic can be written and loaded into the kernel without modifying the kernel itself, offering performance close to hardware-based solutions but with the flexibility of software.
eBPF programs are just-in-time compiled (JIT) to native machine code, ensuring the execution speed of XDP programs is extremely fast.
Zero-Copy Packet Processing (XDP_TX):
XDP can operate in zero-copy mode, where the packet does not need to be copied from one buffer to another. Instead, it can be processed and then immediately transmitted back out (e.g., for forwarding or load balancing). This avoids additional overhead from copying data between memory spaces.
Low Latency Path (Hardware Acceleration):
XDP is closely integrated with modern NICs, and some NICs even offer hardware offloading for XDP, meaning the packet processing can be offloaded to the NIC hardware, further improving performance by freeing up CPU resources.
Efficiency in Dropping Unwanted Packets:
XDP can be used to drop unwanted or malicious traffic (like DoS attacks) very early in the packet processing pipeline. Since packets are dropped before any significant CPU or memory resources are consumed, this leads to better overall system performance under high packet load.
Key Performance Factors in XDP:
Fewer Instructions Per Packet: Because XDP runs early in the packet pipeline, fewer CPU instructions are needed to process each packet, leading to higher throughput.
Reduced Latency: Packets can be processed and forwarded with very low latency due to minimal overhead.
High Throughput: XDP can handle millions of packets per second (Mpps), making it ideal for applications requiring extremely high packet processing rates.
Use Cases for XDP
XDP's high performance makes it ideal for several use cases, especially in high-throughput, low-latency environments. Below are some key use cases:
DDoS Mitigation:
XDP is well-suited for denial of service (DoS) protection, particularly against high-volume Distributed Denial of Service (DDoS) attacks. Because XDP can inspect and drop packets directly in the NIC driver, malicious traffic can be filtered out before it consumes system resources.
XDP can handle large packet volumes and drop attack traffic early in the pipeline, thus protecting the system's core services from being overwhelmed.
Load Balancers:
XDP is often used to build high-performance load balancers that distribute incoming traffic to backend servers or services. Since XDP can process packets at wire speed and reroute them, it provides low-latency load balancing that scales with high throughput.
An example is Katran, Facebook's open-source XDP-based load balancer that scales to millions of requests per second.
Packet Filtering and Firewalling:
Packet filtering (e.g., based on IP, port, or protocol) is another common XDP use case. XDP can drop or allow packets based on custom criteria, making it useful for implementing network firewalls or access control lists (ACLs) at very high speeds.
For example, XDP can filter out specific traffic patterns before the packets reach the firewall or other security layers, reducing processing load on these systems.
High-Performance Network Functions (NFV):
XDP can be used in Network Function Virtualization (NFV), where various network functions like routers, firewalls, and packet inspection engines are virtualized. These functions can be offloaded to XDP for ultra-fast packet processing.
NFV services like load balancers, NAT, and packet classification can run on XDP to achieve near line-rate packet processing, making them more efficient in cloud and virtualized environments.
Packet Forwarding:
XDP can be used for fast packet forwarding by inspecting incoming packets and forwarding them to the appropriate interface based on predefined rules. This is useful for building low-latency software routers or switches.
Since XDP forwards packets directly within the network driver, it can achieve wire-speed forwarding with very low CPU overhead.
Telemetry and Network Monitoring:
XDP can be used to implement high-speed telemetry and monitoring solutions. For example, it can capture and log packet metadata (like headers or traffic statistics) without actually processing or forwarding the entire packet.
This enables operators to monitor network traffic at very high volumes without degrading network performance.
Custom Packet Processing:
XDP is suitable for custom packet processing applications, such as traffic shaping, packet transformations, or custom protocol implementations. Users can define how packets are modified, forwarded, or dropped entirely based on custom logic executed by an XDP program.
For example, XDP can be used to modify headers for encapsulation (e.g., for tunneling protocols like VXLAN) or for performing custom Quality of Service (QoS) handling.
Low-Latency Service Mesh:
XDP can be integrated into service meshes to handle low-latency packet routing between microservices. Since service meshes route internal service requests over the network, XDP can speed up the traffic routing by processing and forwarding packets directly in the NIC driver, reducing the overhead compared to traditional software-based routing.
Summary of XDP's Advantages:
Early packet processing in the NIC driver avoids the overhead of the kernel's networking stack.
Minimal memory allocation and resource consumption due to packet processing before sk_buff allocations.
Low latency and high throughput because of reduced CPU cycles per packet.
Flexibility from eBPF, allowing custom logic to be applied in the data path without modifying the kernel or relying on user-space daemons.
Integration with hardware acceleration, making it even faster in some NICs.
Conclusion:
XDP is a powerful tool for building high-performance, low-latency packet processing systems. Its ability to handle network packets in the NIC driver before they traverse the networking stack makes it an ideal choice for use cases requiring high throughput and minimal overhead, such as DDoS mitigation, load balancing, packet filtering, and NFV. By combining kernel-level efficiency with the flexibility of eBPF, XDP delivers both speed and customizability, making it a go-to solution for network-intensive applications.
I don't feel the IPFire devs are ready for XDP. I think it is up to how many community users want this XDP performance and flexibility (not many, in my impression).
It takes specific hardware to support it.
Most users do not have it.
In a commercial application, it can capture metadata at full speed.
Great if you're an ISP.
Here is an (outdated) list of hardware supporting XDP: xdp-project/areas/drivers at master · xdp-project/xdp-project · GitHub
My $100 mini PC with an Intel Ethernet Controller I226, which I use at home, also supports it natively. XDP also comes with a software SKB mode so it can run on any NIC, but of course performance is not as good as with a natively supported XDP driver.