While trying to understand why Squid saturates the process memory, I looked at the proxy log and noticed that Squid seems to go haywire; for about 30 seconds I find this in the log:
In most cases, this means the client opened a TCP connection to a Squid
listening port and then closed it without sending the HTTP headers. To
figure out who is at fault, you need to figure out who is making these
connections to Squid and why they are closing them without sending HTTP
headers (if that is what they are actually doing).
my understanding is that IPTables redirects requests for port 80 to Squid on port 3128, and Squid, if it has an intercept directive in the config, will forward the traffic. For some reason the HTTP header is not transmitted to Squid: either the firewall is not passing it on (I doubt it), or the firewall is not receiving the header in the first place.
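The interception setup described above would, in a generic transparent-proxy configuration, look roughly like the sketch below. This is NOT IPFire's actual rule set; the interface name `green0` and the listening address are assumptions taken from the log lines later in this thread.

```
# Generic transparent-proxy sketch -- not IPFire's actual rules.
# Redirect HTTP traffic arriving from the LAN interface to Squid's
# intercept port (3128 here, matching the setup discussed above).
iptables -t nat -A PREROUTING -i green0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128

# Matching squid.conf line telling Squid that this port receives
# redirected (interception) traffic:
#   http_port 192.168.12.1:3128 intercept
```

If the client closes the connection before sending its request line, Squid sees exactly the "closed without sending HTTP headers" situation quoted above.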
If you use curl from IPFire, do you get the same message?
It must come from the incoming traffic. I don't have a hypothesis beyond a vague DNS problem. Can you cross-check the DNS logs over the same time intervals?
Let’s think this through. A client on green or blue requests a connection to a web site. Unbound resolves the IP; the client’s request is sent on port 80 to the kernel, which redirects it to Squid on port 3128. Squid goes to fetch the page but does not receive the HTTP header from the target, triggering a storm of repeated requests from the client. At a very minimum you should see, in the kernel logs, the port 80 to port 3128 redirects during that time frame.
Do you agree with this?
EDIT: when the HTTP download is successful, I think you should also see the return traffic from 3128 to 80 in the kernel logs.
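One way to confirm whether the redirects actually happen in the window of interest would be a netfilter LOG rule inserted ahead of the REDIRECT rule. Again a generic sketch, not IPFire's configuration; the interface name is an assumption:

```
# Log each port-80 packet about to be redirected to Squid, so the
# kernel log (dmesg / /var/log/messages) carries timestamps that can
# be correlated with cache.log. Insert before the REDIRECT rule.
iptables -t nat -I PREROUTING -i green0 -p tcp --dport 80 \
         -j LOG --log-prefix "HTTP->SQUID: " --log-level info
```

Remove the rule again after the test, since it logs one line per connection attempt and will be noisy during a request storm.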
what version of Squid is currently shipped in IPFire?
I would like to open a ticket on the squid.org portal. I found an existing ticket describing an error that prevented browsing on HTTPS ports, but as I’m not an expert I can’t tell whether it applies to my case.
You could, but without some information about what in IPFire is causing it, it will be difficult for them to figure out what to fix.
I don’t see those log messages that you are getting. If other people report the same thing then we might be able to figure out what the common setup is and see if it can be replicated on another machine not having the problem. Once it is reproducible, a developer can investigate it on one of their machines.
I am not saying that you shouldn’t raise an IPFire bug, but I think you first need to establish that IPFire is actually the cause of the problem. Raising a bug with both Squid and IPFire is not a good idea: both are unlikely to be the root cause at the same time.
You could try re-installing Core Update 171 on one of the machines having the problem, restore the settings, and then see whether that one no longer has the problem while the others still do.
That would then be a good indication that IPFire is somehow linked to it.
looking at the Squid log, I realized that a few seconds before the request storm starts, cache.log reports:
2023/02/07 08:49:31 kid1| WARNING! Your cache is running out of file descriptors listening port: 192.168.12.1:3128
2023/02/06 09:08:01 kid1| WARNING! Your cache is running out of file descriptors listening port: 192.168.12.1:3128
This appears on all the firewalls I’ve monitored.
According to the wiki page on the web proxy this can occur if you have a large number of clients or high/unusual traffic. A limit is then reached where no more cache files can be opened.
It then suggests increasing the number of file descriptors by 1024 or more. Increasing this number will increase the memory consumption.
From your earlier info it sounds like you don’t knowingly have high/unusual traffic. So while the above might mitigate the problem, it doesn’t solve it if there are traffic levels you don’t expect or know of.
Might be worth increasing the file descriptors entry and seeing what happens: whether the WARNING about file descriptors no longer occurs, and whether the 16,000-odd entries in the log disappear as well.
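On the Squid side, the per-instance ceiling corresponds to the `max_filedescriptors` directive in squid.conf; on IPFire the value is normally managed through the web-proxy page, so change it there if possible rather than editing the file by hand. The value below is purely illustrative:

```
# squid.conf fragment -- illustrative value only; on IPFire this is
# normally set via the web-proxy configuration page.
# The OS-level limit (check with `ulimit -n` in the shell that starts
# Squid) must be at least this high for the setting to take effect.
max_filedescriptors 8192
```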
You can’t reset to Core Update 171. You have to take a backup, save it on another computer, do a fresh install from scratch, and then restore the backup.
I will be prepping a new machine with 171 this weekend.
For the descriptors, I tried increasing them as the wiki says.
I checked with the command grep "WARNING" /var/log/squid/cache.log
and after increasing the descriptors it no longer reports the warning.
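To see how often the warning fires per day, rather than just whether it appears at all, a small pipeline over cache.log helps. It is sketched here against a mocked log excerpt (modeled on the lines quoted earlier in the thread), since the real file lives on the firewall:

```shell
# Mocked cache.log excerpt modeled on the lines quoted above.
cat > /tmp/cache.log.sample <<'EOF'
2023/02/07 08:49:31 kid1| WARNING! Your cache is running out of file descriptors listening port: 192.168.12.1:3128
2023/02/06 09:08:01 kid1| WARNING! Your cache is running out of file descriptors listening port: 192.168.12.1:3128
2023/02/06 09:09:12 kid1| Some other message
EOF

# Count WARNING lines per day; on the firewall, point this at
# /var/log/squid/cache.log instead of the sample file.
grep 'WARNING' /tmp/cache.log.sample | awk '{print $1}' | sort | uniq -c
```

A sudden spike of warnings on one day would line up with the request-storm windows you described.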
But couldn’t the cache manager help?
I tried looking for information but didn’t find much, and browsing the various items without knowing what to look for isn’t much use.
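The cache manager can help here: if `squidclient` is available on IPFire (an assumption — it is shipped with Squid but may not be packaged on every system), `squidclient mgr:info` dumps the runtime counters, and the file-descriptor section is the part relevant to this thread. A sketch of extracting it, run against a mocked report so the filtering itself can be shown (the field names follow stock Squid; the values are invented for illustration):

```shell
# Mocked fragment of a `squidclient mgr:info` report -- field names
# as in stock Squid, values invented for illustration.
cat > /tmp/mgr_info.sample <<'EOF'
File descriptor usage for squid:
        Maximum number of file descriptors:   1024
        Largest file desc currently in use:    950
        Number of file desc currently in use:  940
        Available number of file descriptors:   84
        Reserved number of file descriptors:   100
EOF

# On the firewall the real call would be something like:
#   squidclient mgr:info | grep -i 'file desc'
grep 'file desc' /tmp/mgr_info.sample
```

Watching "Available number of file descriptors" approach zero would confirm the same exhaustion that the WARNING lines report.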
Another strange thing: if I set 600 MB as the memory cache size limit, why do I see an occupation of 800 MB in the services graph?
My network hardware hasn’t changed in the last month. Moreover, I don’t think it’s related to this, since I see the same anomaly, the storm of requests, on 3 other systems as well.