Strange squid process behavior

What version of squid is currently shipped in IPFire?

I would like to open a ticket on the portal. I found an existing ticket about an error that prevented browsing on HTTPS ports, but as I’m not an expert I can’t tell whether it matches my case.

This is the link.

In any case, the anomalous behavior occurs on all the firewalls I monitor, and it has been appearing since January 2023.

Would opening a support case on ipfire’s bugzilla make sense?

Squid 5.7, which is the latest release.

You could, but without some information about what in IPFire is causing it, it will be difficult to figure out what to fix.

I don’t see those log messages on my systems. If other people report the same thing, then we might be able to figure out what the common setup is and see whether the problem can be replicated on a machine that doesn’t currently have it. Once it can be replicated, a developer can investigate it on one of their machines.

I am not saying that you shouldn’t raise an IPFire bug, but I think you need to be reasonably sure that IPFire is the cause of the problem. Raising a bug at both squid and IPFire is not a good idea; both are unlikely to be the root cause at the same time.

You could try re-installing Core Update 171 on one of the machines having the problem, restore the settings, and then see whether that one no longer has the problem while the others still do.
That would be a good indication that IPFire is somehow linked to it.

Thanks for the reply. How do I reset to Core Update 171?

Looking at the squid log, I realized that a few seconds before the request storm starts, cache.log reports:
2023/02/07 08:49:31 kid1| WARNING! Your cache is running out of file descriptors listening port:
2023/02/06 09:08:01 kid1| WARNING! Your cache is running out of file descriptors listening port:

This appears in all the firewalls I’ve monitored.

I don’t know if it is useful.

According to the wiki page on the web proxy this can occur if you have a large number of clients or high/unusual traffic. A limit is then reached where no more cache files can be opened.

It then suggests increasing the number of file descriptors by 1024 or more. Increasing this number will increase the memory consumption.

See the wiki page on file descriptors.

From your earlier info it sounds like you don’t knowingly have high/unusual traffic, so while the above might mitigate the problem, if you have high traffic levels that you don’t expect or know of, it doesn’t solve the underlying problem.

It might be worth increasing the file descriptors entry and seeing what happens: check whether the WARNING about file descriptors stops appearing, and whether the roughly 16,000 entries in the log do too.
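As a hypothetical sketch of how you could track whether the warning stops, the lines can be tallied per day straight from the log. On IPFire the file lives at /var/log/squid/cache.log; two invented sample lines stand in for it here:

```shell
# Hypothetical sketch: count squid file-descriptor warnings per day.
# The sample file stands in for /var/log/squid/cache.log.
cat <<'EOF' > /tmp/sample-cache.log
2023/02/07 08:49:31 kid1| WARNING! Your cache is running out of file descriptors
2023/02/06 09:08:01 kid1| WARNING! Your cache is running out of file descriptors
EOF

# The first field of each squid log line is the date, so grouping on it
# gives one count per day.
grep 'running out of file descriptors' /tmp/sample-cache.log \
  | awk '{print $1}' | sort | uniq -c
```

If the daily count drops to zero after raising the file-descriptor limit, that setting was the bottleneck.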

You can’t reset to Core Update 171. You have to take a backup, save it on another computer, do a fresh install from scratch, and then restore the backup.


I doubt it, as the transparent proxy in IPFire is only on port 80.


I will be prepping a new machine with Core Update 171 this weekend.

For the descriptors, I increased them as the wiki says.
I checked with the command grep “WARNING” /var/log/squid/cache.log,
and after raising the descriptors it no longer reports the warning.
But couldn’t the cache manager help?
I tried looking for information, but I didn’t find much, and browsing the various items without knowing what to look for isn’t much use.
Another strange thing: if I set 600 MB as the memory cache size limit, why does the services graph show an occupation of 800 MB?
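On the cache manager question: squid’s cache manager can indeed report descriptor usage. If the squidclient tool is available, `squidclient mgr:info` prints a file-descriptor section much like the invented sample below, and a small awk filter can pull out the current-versus-maximum numbers:

```shell
# Hypothetical sketch: extract file-descriptor usage from squid's
# cache-manager "info" report. The here-doc mimics the relevant part of
# `squidclient mgr:info` output; all figures are invented.
cat <<'EOF' > /tmp/mgr-info.txt
File descriptor usage for squid:
  Maximum number of file descriptors:   16384
  Largest file desc currently in use:    912
  Number of file desc currently in use:  774
EOF

# Split each line on "colon plus spaces" and keep the two numbers we need.
awk -F': *' '/Maximum number of file descriptors/ {max=$2}
             /Number of file desc currently in use/ {cur=$2}
             END {print cur "/" max " descriptors in use"}' /tmp/mgr-info.txt
# Prints: 774/16384 descriptors in use
```

Watching that ratio over time would show whether the limit is actually being approached before a request storm.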

My network hardware hasn’t changed in the last month, but I don’t think it’s related anyway, since I see the same anomaly, the storm of requests, on 3 other systems as well.

Do these 4 systems have something in common? The same applications, similar purposes?

No, nothing in common except that every one of them has installed:

But before January I didn’t notice these anomalies. For example, my graph of squid’s processes in memory averaged around 200 MB over the last year; only since January has it gone crazy.

I noticed because, on all four systems, navigation was slow, and sometimes the browser reported that sites which had worked a few minutes earlier were unreachable.

I logged into ipfire, cleared the cache and everything worked again.

I didn’t mean the IPFire systems, but the attached networks.
Squid handles the web requests of the LAN. Without these there would be no problem. :wink:
Have you tried switching off squidclamav? I don’t remember an issue with it at the moment, but you never know.

The networks are different: I have about 30 home automation devices, but they were there in December too; one is an office with 2 people, and the other is my parents’ firewall.

I have had squidclamav deactivated for 10 days now, just to see whether it was the cause.

Maybe this and this will help.

There are other topics about clamav found by searching.

Thanks, but does clamav have any logs?
I’ve looked in /var/log but can’t find anything.

The wiki page I linked earlier in this thread about the cache mentions that the cache memory value is not a fixed limit; under high load, squid can exceed it.

There is also a note that the cache memory value is not the total memory footprint of the cache.


I believe you have to grep in /var/log/messages, or you can go to the WUI menu Logs - System Logs, select ClamAV in the Section: drop-down box, and press the Update button.
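As a hypothetical illustration of the grep approach: clamd and freshclam lines in /var/log/messages are tagged with the daemon name, so filtering on those names isolates them. The sample lines below are invented, and the exact message text varies by ClamAV version:

```shell
# Hypothetical sketch: pull ClamAV entries out of the syslog file.
# The sample stands in for /var/log/messages; the lines are invented.
cat <<'EOF' > /tmp/sample-messages
Feb  7 08:49:31 ipfire clamd[1234]: SelfCheck: Database status OK.
Feb  7 08:50:02 ipfire squid[4321]: Starting Squid Cache...
Feb  7 09:00:00 ipfire freshclam[5678]: daily database updated
EOF

# Match both the scanner daemon (clamd) and the updater (freshclam).
grep -E 'clamd|freshclam' /tmp/sample-messages
```

The squid line is left out of the result, so only ClamAV activity remains.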


Could it be the content filter causing this problem?

I’m saying this because I tried disabling it on one of the firewalls, and it’s been 4 days since the request storm last appeared.

Good morning,
we are on the third day with the URL filter disabled, and the connection storm has not reoccurred. All four firewalls involved are behaving the same way; I keep my fingers crossed.


Update on the situation:
it has now been 10 days that, with the content filter deactivated, the strange storm of requests has not appeared on any of the 4 firewalls.

Now I’d like to know why this happens.
I’ll try to reactivate the filter to see whether the problem comes back.


Good morning,
it’s been 20 days now that, with the content filter deactivated, the storm of requests no longer occurs. On my firewall the only peak is when I watch Amazon Prime and Disney Plus.
The other firewalls are all fine as well.
All are updated to Core Update 173; I’ll try to re-enable content filtering on mine to see what happens.

Hi @sky7176 ,

I seem to have the same problem as you :frowning:

Did the problem reappear on your side?

Sorry for the delay in responding.
No, it hasn’t happened to me in over a month.
Other than what I described in the post, I did nothing; it disappeared just as suddenly as it appeared.
I’ve never been able to find an explanation.
A DoS against my four firewalls?