I have been closely following the recent Cambrian explosion of language models. These tools are based on error correction and allow machines to do things to a degree unthinkable even a few months ago, and they will progress along a curve similar to Moore's law. We are just at the first frame of the movie that is going to play before our eyes. Dismissing these language models because right now they cannot code or write better than us is a huge mistake. Regardless of how we feel today, we will likely underestimate the impact this technology will have on our lives, so at least let's try to pay attention. This also concerns the development of firewalls.
These machines will be used to do all sorts of things in “cyberspace”, including exploiting software and attacking networks. Considering how fast these systems adapt, I believe we will need another AI to defend against this kind of threat. An arms race is probably already starting.
So I came to the conclusion that soon we might need to integrate language models into firewalls.
I asked ChatGPT about it. Since I will delete the link to its response, here is the message:
Yes, it is entirely possible, and research in this area has been ongoing for some time now. An AI model can be trained to recognize network attacks and attempts to exploit software weaknesses by using machine learning techniques, specifically those related to pattern recognition and anomaly detection.
An AI model can learn to identify typical network behavior, and when it observes behavior that deviates significantly from this norm, it can flag this as potential malicious activity. This kind of anomaly detection is particularly effective against zero-day attacks, which are new and unknown threats that conventional security solutions might not recognize.
Similarly, AI can learn the patterns and characteristics of known software vulnerabilities and use this knowledge to spot potential weaknesses in software systems that could be exploited by attackers.
Once the AI model has been trained, it could indeed be integrated into a firewall or any other security solution to provide real-time protection against threats. When the AI detects a potential attack or exploit, it could take a number of actions such as blocking the suspicious activity, alerting a human operator, or even taking more sophisticated actions to counteract the threat.
However, it’s worth noting that while AI provides a powerful tool for cybersecurity, it is not a silver bullet. False positives (legitimate activity flagged as malicious) and false negatives (malicious activity that goes unnoticed) can still occur. Therefore, AI is typically used in conjunction with other cybersecurity measures rather than as a standalone solution.
As of my knowledge cutoff in September 2021, companies like Darktrace, Vectra, and others are already using AI for network security, threat detection, and response. With the continuous advancements in AI and machine learning, these systems are expected to become more accurate and efficient.
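To make the anomaly-detection idea from the response above a bit more concrete, here is a minimal toy sketch: learn a statistical baseline of “normal” per-client request rates, then flag rates that deviate too far from it, which is where a firewall could block or alert. Everything here (the function names, the 3-sigma threshold, the sample traffic) is hypothetical and for illustration only; real systems use far richer features and models.

```python
# Toy anomaly detection on network request rates, stdlib only.
# Hypothetical sketch: names, threshold and data are illustrative.

from statistics import mean, stdev

def build_baseline(samples):
    """Learn the mean and standard deviation of observed request rates."""
    return mean(samples), stdev(samples)

def is_anomalous(rate, baseline, threshold=3.0):
    """Flag a rate more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return rate != mu
    return abs(rate - mu) / sigma > threshold

# Typical traffic: around 10 requests/second per client.
normal_rates = [9, 11, 10, 12, 8, 10, 11, 9, 10, 12]
baseline = build_baseline(normal_rates)

print(is_anomalous(10, baseline))   # ordinary traffic -> False
print(is_anomalous(500, baseline))  # far outside the baseline -> True:
                                    # a firewall could block or alert here
```

This is the simplest possible version of “learn typical behavior, flag deviations”; the point of the post is that a language model could play the role of a much smarter, adaptive version of this detector.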