A chat with ChatGPT: should LLMs be integrated into firewalls?

I have been closely following the recent Cambrian explosion of language models. These tools are based on error correction and allow machines to do things to a degree unthinkable even a few months ago, and they will progress along a curve similar to Moore's law. We are just at the first frame of the movie that is going to play before our eyes. Dismissing these language models because right now they cannot code or write better than us is a huge mistake. Regardless of how we feel today, we will underestimate the impact this technology will have on our lives, but at least let's try to pay attention. This also concerns the development of firewalls.

These machines will be used to do all sorts of things in “cyberspace”, including exploiting software and attacking networks. Considering how fast these things adapt, I believe we will need another AI to defend against these kinds of threats. An arms race is probably already starting.

So I came to the conclusion that soon we might need to integrate language models into Firewalls.

I asked ChatGPT about it and got a link to its response. Since I will delete this link, here is the message:

Yes, it is entirely possible, and research in this area has been ongoing for some time now. An AI model can be trained to recognize network attacks and attempts to exploit software weaknesses by using machine learning techniques, specifically those related to pattern recognition and anomaly detection.

An AI model can learn to identify typical network behavior, and when it observes behavior that deviates significantly from this norm, it can flag this as potential malicious activity. This kind of anomaly detection is particularly effective against zero-day attacks, which are new and unknown threats that conventional security solutions might not recognize.
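The anomaly-detection idea described above can be sketched with nothing more than a rolling statistical baseline. This is a toy illustration, not a real IDS: the feature (connections per minute from one host) and the z-score threshold are my own assumptions.

```python
import statistics

def is_anomalous(history, value, z_threshold=3.0):
    """Flag `value` if it deviates more than z_threshold standard
    deviations from the mean of the observed history."""
    if len(history) < 2:
        return False  # not enough baseline data yet
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

# Baseline: connections per minute seen from one host on a quiet LAN.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

normal_flag = is_anomalous(baseline, 15)   # within the learned norm
attack_flag = is_anomalous(baseline, 400)  # e.g. a sudden port scan
```

A real system would learn far richer features than a single rate, but the principle is the same: model "normal", then flag large deviations.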

Similarly, AI can learn the patterns and characteristics of known software vulnerabilities and use this knowledge to spot potential weaknesses in software systems that could be exploited by attackers.

Once the AI model has been trained, it could indeed be integrated into a firewall or any other security solution to provide real-time protection against threats. When the AI detects a potential attack or exploit, it could take a number of actions such as blocking the suspicious activity, alerting a human operator, or even taking more sophisticated actions to counteract the threat.
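A minimal sketch of how a detector's output could drive those firewall actions. The score thresholds, the event shape, and the nftables-style rule string below are all illustrative assumptions, not a real integration.

```python
def respond(event, score, block_above=0.9, alert_above=0.6):
    """Map a model's threat score to a firewall action.
    Thresholds are illustrative, not tuned values."""
    if score >= block_above:
        # e.g. emit an nftables-style rule dropping the source address
        return ("block", f"add rule inet filter input ip saddr {event['src']} drop")
    if score >= alert_above:
        return ("alert", f"suspicious activity from {event['src']} (score {score:.2f})")
    return ("log", f"observed {event['src']} (score {score:.2f})")

# A high-confidence detection gets translated into a blocking rule.
action, detail = respond({"src": "203.0.113.7"}, 0.95)
```

The point of the tiered response is that only high-confidence detections act autonomously; mid-range scores go to a human operator.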

However, it’s worth noting that while AI provides a powerful tool for cybersecurity, it is not a silver bullet. False positives (legitimate activity flagged as malicious) and false negatives (malicious activity that goes unnoticed) can still occur. Therefore, AI is typically used in conjunction with other cybersecurity measures rather than as a standalone solution.
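The false-positive/false-negative trade-off mentioned here is usually quantified with precision and recall. A small sketch of that computation on toy data (the numbers are made up for illustration):

```python
def precision_recall(predictions, labels):
    """predictions/labels are booleans: True = flagged / actually malicious."""
    tp = sum(p and l for p, l in zip(predictions, labels))       # true positives
    fp = sum(p and not l for p, l in zip(predictions, labels))   # false positives
    fn = sum(l and not p for p, l in zip(predictions, labels))   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy evaluation: 2 of 3 alerts were real, and 1 real attack was missed.
preds = [True, True, True, False, False]
truth = [True, True, False, True, False]
p, r = precision_recall(preds, truth)
```

Low precision means operators drown in false alarms; low recall means attacks slip through, which is why AI detection is layered with other measures.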

As of my knowledge cutoff in September 2021, companies like Darktrace, Vectra, and others are already using AI for network security, threat detection, and response. With the continuous advancements in AI and machine learning, these systems are expected to become more accurate and efficient.


You make a very good point for AI in a firewall:
alerting the operator.


Very valid reasoning, but some aspects are more important than others: physical access to the firewall is still a very important requirement for hacking it. One could also look into methods of mitigating challenge-response attempts. We already have that in many cases, with attempt limits and diverse lockout scenarios, but that is of course rather primitive and in general only useful for mitigating brute-force attacks.
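Those attempt limits and lockout scenarios can be sketched as a small failure counter with a cooldown window. The limits below are arbitrary illustrative values, not a recommendation:

```python
import time

class LockoutGuard:
    """Primitive brute-force mitigation: after max_attempts failures,
    lock the source out for lockout_seconds."""
    def __init__(self, max_attempts=5, lockout_seconds=300):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # src -> (failure_count, last_failure_time)

    def allowed(self, src, now=None):
        now = now if now is not None else time.time()
        count, last = self.failures.get(src, (0, 0.0))
        if count >= self.max_attempts and now - last < self.lockout_seconds:
            return False  # still inside the lockout window
        return True

    def record_failure(self, src, now=None):
        now = now if now is not None else time.time()
        count, _ = self.failures.get(src, (0, 0.0))
        self.failures[src] = (count + 1, now)

guard = LockoutGuard(max_attempts=3, lockout_seconds=60)
for _ in range(3):
    guard.record_failure("198.51.100.9", now=1000.0)
locked = not guard.allowed("198.51.100.9", now=1010.0)  # within lockout window
freed = guard.allowed("198.51.100.9", now=1100.0)       # window has expired
```

As the post says, this is primitive: it stops brute force but does nothing against a single well-crafted exploit attempt.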

An AI will probably not give you enough information to breach both of the above, but possibly enough to bypass either of them.

It would, however, be very good to have AI-based profiling and security assessment of the current network: do the hacker's job, implement basic measures, and recommend others. I had a discussion with Zyxel support a couple of years ago where I argued that profiling the user/admin and their network would be a huge step towards increasing overall security. We see that development, but then again, I am not very up to date with the most recent advances in that area; let's just say it would probably render current tech rather obsolete. AI is a tool for that.

AIPfire next, perhaps… :shushing_face:


As @hvacguy has suggested, it could check the logs in real time as well. It could complement, or be part of, an intrusion prevention system.
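Real-time log checking could start as simply as pattern-matching the auth log for repeated failures. A toy sketch, assuming OpenSSH-style log lines; the threshold is arbitrary and a real system would watch many more patterns:

```python
import re
from collections import Counter

# Matches OpenSSH-style failed-login messages and captures the source IP.
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def scan_auth_log(lines, threshold=3):
    """Count failed-login lines per source IP and report any source
    reaching the threshold."""
    counts = Counter()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]

log = [
    "sshd[811]: Failed password for root from 192.0.2.10 port 52812 ssh2",
    "sshd[811]: Failed password for root from 192.0.2.10 port 52814 ssh2",
    "sshd[811]: Failed password for admin from 192.0.2.10 port 52816 ssh2",
    "sshd[812]: Accepted password for alice from 10.0.0.5 port 40022 ssh2",
]
suspects = scan_auth_log(log)
```

A language model's advantage over fixed regexes would be handling log formats and attack patterns it was never explicitly told about.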

Good one.

There is another aspect to consider: AIPFire would have very different hardware requirements compared to IPFire. Besides GPUs, there will be hardware development for integrated systems that are optimized for AI.


I did not think of the hardware required.
I would not wish to make it a required feature,
so you can still have a lower-resource option.
I will say that my next firewall would have a GPU
if AI detection were a feature.

Maybe off topic but maybe apropos :innocent:


Yes, it is difficult to keep track; every day a new one pops up. But this is good news in my opinion, as we need decentralization: we need more models that we can install and use privately and that can compete with these huge models in the hands of multinational corporations.

Hopefully some open-source project looking for ways to improve network security will emerge. At least, I really hope so.

About privateGPT: there is another project inspired by it that aims to improve its low speed, whose author has just produced a video tutorial on YouTube on how to install it. Those projects are exactly what we need: open-source implementations of these tools that build on each other's work.


e.g. :smiley:


While in development, sure, new models and new approaches, but at some point it needs to be standardized, with fewer actors doing better work with it. That contributes to trust and security.
I will never trust a random git dev.

I hear you, and this is the logical way to see things. I just want to highlight that there is a fork in this path: left, we go the Linux route; right, we go the Google/Facebook route. You would not have trusted Linus Torvalds' code in 1993, yet here we are. Hopefully this is the way we go for AI as well, because if we go the other way we will have all the problems of today's Internet (and society) 1000X. AI is nothing but a machine that amplifies what we can do with it at a mind-bending rate. We have to go the Linux way.



Something like ChatGPT or an LLM could possibly be applied to Quality of Service.

We have turned on IPFire QoS (Cake?), and we seem to have good performance for audio and video in particular over our very small LAN.

But it's really hard to know if we're doing the right thing, and it was tricky to learn about. And because we don't do it very often, I can't say that it's well tuned. This seems like an ideal application for something like AI as a plug-in, plus a nice higher-level UX.
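One shape such an assistant could take is recommending per-class bandwidth shares from observed traffic. A toy sketch under that assumption; the class names and the floor value are made up, and this is not how IPFire's QoS is actually configured:

```python
def suggest_qos_shares(observed_bytes, floor=0.05):
    """Suggest per-class bandwidth shares proportional to observed
    usage, with a minimum floor per class so no class starves."""
    total = sum(observed_bytes.values())
    raw = {cls: b / total for cls, b in observed_bytes.items()}
    # Apply the floor, then renormalize so the shares sum to 1.
    floored = {cls: max(share, floor) for cls, share in raw.items()}
    norm = sum(floored.values())
    return {cls: round(share / norm, 3) for cls, share in floored.items()}

# Observed traffic over some window, in bytes per class.
shares = suggest_qos_shares({"voip": 100, "web": 800, "bulk": 100})
```

An LLM front-end could then explain such a recommendation in plain language, which is exactly the "higher-level UX" part.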


And as an IPFire user,
our biggest hurdle is cash for development,
for I have no such skills.


The biggest hurdle is probably finding more volunteers. And available time for those volunteers. And cash.