From the command line, you can work down through the file system with something like du -h -d 1 /. Start at / and then replace the / with the largest result of the command. (Note that this logic fails with bind mounts and possibly symlinks, so you need to be a bit more careful interpreting the results.)
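A sketch of one iteration of that descent, with the output sorted so the largest subdirectory is easy to spot (starting at /var here just as an example; -x keeps du on one filesystem, which sidesteps the bind-mount problem):

```shell
# One level of the descent; -x stays on this filesystem (avoids bind mounts),
# sort -rh puts the largest directories first.
du -xh -d 1 /var 2>/dev/null | sort -rh | head -n 5
# Then repeat on the largest entry, e.g. du -xh -d 1 /var/cache ...
```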
My production system has backups around 340MB with logs.
I just ran the backup excluding logs and it gave a file of 270MB.
Regarding the size of 700MB: was that after the backup had completed, or was it still backing up? If it had completed, then you can view the backup file with an archive viewer application. Although the extension is .ipf, it is a standard gzipped tar file.
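For example, the contents can also be listed from the command line and sorted by member size (the backup path below is just a placeholder, substitute your own file):

```shell
# Path is an example -- point it at your own backup file.
BACKUP=/var/ipfire/backup/ipfire-backup.ipf
# List archive members, largest first (field 3 of tar -tv output is the size):
tar -tvzf "$BACKUP" 2>/dev/null | sort -k3 -rn | head -n 20
```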
Looking inside the backup, it's mostly /var/cache/suricata/sgh/ being full of thousands of smaller files with names like 02714735112378649452_v1.hs, ranging in size from 1 KB to 21.2 MB.
I am also having log issues, to the point that I can't open any of the log graph reports for yesterday, 12/11/2025. Normally I see about 10k hits on a regular day in my IPFire firewall log, but yesterday I got 16M+ hits, and when I try to open the reports my CPU goes from 40°C to 80°C+ and I get an internal server error. I did a backup with logs, and it went from 32MB without logs to 257MB with logs. Now all the reports are very slow, so I can't see who was doing what to my RED interface.
What does it say the total size of that directory is? On my system it shows 607MB, but after running the tar command used for the backup, the compressed tarball of just that 607MB directory is 100MB. So the files are quite compressible.
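To reproduce that comparison on your own system (cache path taken from the earlier post; verify it locally):

```shell
DIR=/var/cache/suricata/sgh
# On-disk size of the directory:
du -sh "$DIR" 2>/dev/null
# Compressed size in bytes, without writing a tarball to disk:
tar -czf - "$DIR" 2>/dev/null | wc -c
```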
So if that directory is what is causing the size of the backup, you should have several GB of those cache files.
Also, what time period do those cache files cover? On my system I have files from Oct 28th to today, so under two months. So either the cache is a new thing, or old entries are cleared out after a certain time.
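One way to check the covered period is to print the oldest and newest cache file with their modification dates (GNU find; cache path as above):

```shell
# Oldest and newest .hs cache files, by modification date:
find /var/cache/suricata/sgh -name '*.hs' -printf '%TY-%Tm-%Td %p\n' 2>/dev/null \
  | sort | sed -n '1p;$p'
```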
Okay, just checked, and 28th Oct is when CU198 was released.
Checked a CU197 backup and the sgh directory is not there. So it looks like the directory was created as part of the move from suricata-7 to suricata-8.
In the release announcement it says:

"Upgraded to Suricata 8.0.1, the IPFire IPS now caches compiled rules for near-instant startup"
I think a cache directory does not make much sense to have in the backup because if you restore it some time later it won’t have any relationship to current traffic.
I will look at submitting a patch to exclude the sgh directory from the backup.
In the meantime users can exclude that directory by adding it to the
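One assumed way to do that from the shell (the user exclude-list path /var/ipfire/backup/exclude.user is an assumption here; verify the exact file on your IPFire release before relying on it):

```shell
# Assumed path of the user exclude list -- verify on your IPFire release.
EXCLUDE=/var/ipfire/backup/exclude.user
# Add the Suricata cache directory to the backup exclusions:
echo "/var/cache/suricata/sgh/*" >> "$EXCLUDE" 2>/dev/null || echo "could not write $EXCLUDE"
```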
On my system this directory is currently 5.1GB with 5,697 files inside, but compressed into the backup .ipf file it’s about 826MB.
The files are dated from 23rd October until today, but IPFire was installed on this system on 12th October.
I agree. After looking into this a bit more, it appears to be the Suricata Hyperscan MPM cache. These binaries are created when Suricata starts, and you see the CPU spike for a couple of minutes while it builds the cache. I think it’s fine not to include these files in the backup, as they’ll be regenerated when a backup is restored and Suricata is enabled anyway. Excluding them doesn’t affect the IPS log or the intrusion prevention logs either.
However, reading through this Suricata blog post (Faster Suricata startups with Hyperscan caching - Suricata), it says that this directory is never cleaned/maintained automatically, so it will always increase in size over time. Perhaps when Suricata is disabled in the WUI, this directory should be purged to help clean things up if needed.
Okay, you must have a larger number of ruleset providers and more rules selected than what I have.
I am just using Emerging Threats and have 14 of their rulesets selected and just using their default selection of the rules in each ruleset.
As you saw the first files from 23rd Oct, can you confirm that you updated to CU198 Testing on that date on that system? If not, then it means you were already seeing cache files with CU197, which would mean my explanation of the problem is wrong.
It does say that there is a Pull Request for pruning stale cache files.
The second link is for the latest version of the pull request. Currently it is under review, with two failing checks out of 60.
I would suggest that we wait for the pull request to be merged; when we update to that version, we will get the automatic pruning of the cache.
I don’t think we should do something specific to IPFire when Suricata will have its own approach.
I stopped Suricata yesterday and deleted the .hs files.
After the restart, 61 files were created, and this morning after the automatic rules update, 36 new files were created in the cache.
And this message in /var/log/messages corresponds to the number of files:

Dec 13 08:57:41 ipfire suricata: [19641] <Notice> -- Rule group caching - loaded: 61 newly cached: 36 total cacheable: 97
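The same cleanup as a sketch (the initscript path and cache location are assumptions; verify both on your installation, and stop Suricata before deleting anything):

```shell
CACHE=/var/cache/suricata/sgh
# Stop Suricata first (assumed IPFire initscript path):
# /etc/init.d/suricata stop
rm -f "$CACHE"/*.hs                    # delete the compiled rule-group cache
# /etc/init.d/suricata start          # Suricata rebuilds the cache on startup
ls "$CACHE"/*.hs 2>/dev/null | wc -l   # count regenerated cache files
```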
I also have these errors in /var/log/messages:
Dec 13 08:57:30 ipfire suricata-reporter[19631]: Failed to process: database is locked
Traceback (most recent call last):
  File "/usr/bin/suricata-reporter", line 335, in run
    self.process(event)
  File "/usr/bin/suricata-reporter", line 422, in process
    return self.process_alert(event)
  File "/usr/bin/suricata-reporter", line 435, in process_alert
    self.db.execute("INSERT INTO alerts(timestamp, event) VALUES(?, ?)",
sqlite3.OperationalError: database is locked
This error message also appears several times a day.
This lock problem was found after CU198 had been released. The fix has been implemented and is in CU199, which is in the Testing phase and will likely be released early in the New Year.