Strange squid process behavior

In the Unbound DNS log there is nothing for that hour on any of the three firewalls.

what about:

grep "08:49:57" /var/log/messages

Anything helpful in the kernel log around that time?

For that time today there is nothing in the log on my home firewall.

Let’s think this through. A client in green or blue requests a connection to a web site. Unbound fetches the IP, the client’s request goes out on port 80 to the kernel, which redirects it to Squid on port 3128; Squid then goes to fetch the page but does not receive the HTTP header from the target, triggering a storm of repeated requests from the client. At the very minimum you should see kernel log entries for the port 80 to port 3128 redirect during that time frame.

Do you agree with this?

EDIT: when the HTTP download is successful, I think you should also see the return traffic from 3128 to 80 in the kernel logs.
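
Just to make concrete what I mean, here is a minimal sketch of how that redirect could be made visible in the kernel log, assuming the usual iptables REDIRECT setup for a transparent proxy (the interface name, chain and log prefix are illustrative, not necessarily the rules IPFire actually generates):

# log green-side traffic hitting port 80 just before it is redirected to Squid
iptables -t nat -I PREROUTING -i green0 -p tcp --dport 80 -j LOG --log-prefix "HTTP->3128: "
# the transparent redirect itself would look something like this
iptables -t nat -A PREROUTING -i green0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
# matching entries would then show up in the kernel log
grep "HTTP->3128:" /var/log/messages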

I agree, but I don’t see anything. Tonight I’ll make a request and see what happens in the log, then I’ll update you.

you may want to try:

grep ":49:57" /var/log/messages

I also tried grep ":49:57" /var/log/messages, but nothing; around 8 in the morning there are no messages in the log.

Nothing has happened yet today.

What version of squid is currently shipped in IPFire?

I would like to open a ticket on the squid.org portal. I found a ticket describing an error that prevented browsing on https ports, but as I’m not an expert I can’t figure out whether it applies to my case.

This is the link

In any case, the anomalous behavior occurs on all the firewalls I have monitored, and it has been appearing since January 2023.

Would opening a support case on IPFire’s Bugzilla make sense?

5.7, which is the latest release.

You could, but without some information about what in IPFire is causing it, it will be difficult to figure out what to fix.

I don’t see those log messages that you are getting. If other people report the same thing then we might be able to figure out what the common setup is and see whether the problem can be reproduced on another machine that doesn’t currently have it. Once it can be reproduced, a developer can investigate it on one of their machines.

I am not saying that you shouldn’t raise an IPFire bug, but I think you need to be reasonably sure that IPFire is the cause of the problem. Raising a bug with both squid and IPFire is not a good idea; they are unlikely to both be the root cause at the same time.

You could try re-installing Core Update 171 on one of the machines having the problem, restoring the settings, and then seeing whether that one no longer has the problem while the others still do.
That would be a good indication that IPFire is somehow linked to it.

Thanks for the reply. How do I go back to Core Update 171?

Looking at the squid log, I realized that a few seconds before the request storm starts, cache.log reports:
2023/02/07 08:49:31 kid1| WARNING! Your cache is running out of file descriptors listening port: 192.168.12.1:3128
2023/02/06 09:08:01 kid1| WARNING! Your cache is running out of file descriptors listening port: 192.168.12.1:3128

This appears on all the firewalls I’ve monitored.

I don’t know if it can be useful

According to the wiki page on the web proxy this can occur if you have a large number of clients or high/unusual traffic. A limit is then reached where no more cache files can be opened.

It then suggests increasing the number of file descriptors by 1024 or more. Increasing this number will increase the memory consumption.

See wiki page
https://wiki.ipfire.org/configuration/network/proxy/wui_conf/cache#amount of file descriptors

From your earlier info it sounds like you don’t knowingly have high/unusual traffic, so while the above might mitigate the problem, if there are traffic levels you don’t expect or know about then it doesn’t solve the underlying problem.

It might be worth increasing the file descriptors entry and seeing what happens: whether the WARNING about file descriptors no longer occurs, and also whether the 16,000-odd entries in the log go away.
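
If you want to check from the console, a rough sketch (squidclient ships with Squid; whether the WUI setting maps directly to max_filedescriptors in the generated squid.conf is an assumption on my part):

# current file descriptor limits and usage as reported by the cache manager
squidclient mgr:info | grep -i 'file desc'
# the underlying squid.conf directive the WUI setting would correspond to, e.g.
#   max_filedescriptors 16384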

You can’t reset to Core Update 171. You have to take a backup, save it on another computer, do a fresh install from scratch, and then restore the backup.


I doubt it, as the transparent proxy in IPFire is only on port 80.


I will be prepping a new machine with 171 this weekend.

For the descriptors, I tried to increase them as the wiki says.
I checked with grep "WARNING" /var/log/squid/cache.log
and after increasing the descriptors it no longer reports the warning.
But couldn’t the cache manager help?
I tried looking for info but I didn’t find much, and going through the various items without knowing what to look for isn’t much use.
Another strange thing: if I set 600 MB as the memory cache size limit, why does the services graph show about 800 MB in use?
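
For anyone following along, a rough sketch of how the cache manager can be queried from the console, assuming squidclient is installed (mgr:info and mgr:mem are standard Squid cache manager reports, nothing IPFire-specific):

# overall runtime info, including the memory accounted for by Squid
squidclient mgr:info
# detailed per-pool memory utilisation report
squidclient mgr:mem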

My network hardware hasn’t changed in the last month, and in any case I don’t think it’s related, since I see the same anomaly, the storm of requests, on 3 other systems as well.

Do these 4 systems have something in common? The same applications, similar purposes?

No, nothing in common except that they all have installed:
guardian
clamav
squidclamav

But before January I didn’t notice these anomalies; for example, my graph of squid processes in memory has stood at around 200 MB on average over the last year, and only since January has it gone crazy.

I noticed because on all four systems browsing was slow, and sometimes the browser reported that sites which had worked a few minutes earlier were no longer reachable.

I logged into ipfire, cleared the cache and everything worked again.

Hi,
I didn’t mean the IPFire systems, but the attached networks.
Squid handles the web requests of the LAN; without these there would be no problem. :wink:
Have you tried switching off squidclamav? I don’t remember an issue with it at the moment, but you never know.

The networks are different. I have about 30 home automation devices, but they were already there in December; one network is an office with 2 people and the other is my parents’ firewall.

I have had squidclamav deactivated for the last 10 days, just to see whether it was the cause.

Maybe this and this help.

There are other topics about clamav that can be found by searching.

Thanks, but does clamav have any logs?
I’ve looked in /var/log but can’t find anything
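
A rough sketch of where one could look, assuming clamd either logs via syslog or to its own file (whether the IPFire addon enables LogSyslog or LogFile in clamd.conf, and the path below, are assumptions on my part):

# clamd messages sent to syslog, if LogSyslog is enabled in clamd.conf
grep -i clamd /var/log/messages
# a dedicated log file, if LogFile is set in clamd.conf
ls -l /var/log/clamav/ 2>/dev/null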

The wiki page link I gave earlier in this thread related to the cache mentions that the cache memory value is not a fixed limit; under high load squid can exceed it.

There is also a note that the cache memory value is not the total memory footprint of the cache.
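
To put that in squid.conf terms, a minimal illustrative snippet (the 600 MB mirrors the WUI value mentioned earlier; the comments reflect the standard Squid meaning of cache_mem, not anything IPFire-specific):

# cache_mem only sizes the in-memory object cache (in-transit, hot and
# negative-cached objects); it is not a hard cap, Squid may exceed it under
# load, and it is not the total memory footprint of the squid process.
cache_mem 600 MB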
