Strange squid process behavior

While trying to understand why squid saturates the process memory,
I noticed in the proxy log that squid seems to go crazy:
for about 30 seconds I find this in the log:

08:49:57 - error:transaction-end-before-headers
08:49:57 -
08:49:57 -
08:49:57 -
08:49:57 -
08:49:57 -
08:49:32 -

I have three different firewalls and they all show this strange behavior,
at different times, but the log always contains the entry shown above.

Do you have any idea what this is about?


I found this post

In most cases, this means the client opened a TCP connection to a Squid
listening port and then closed it without sending the HTTP headers. To
figure out who is at fault, you need to figure out who is making these
connections to Squid and why they are closing them without sending HTTP
headers (if that is what they are actually doing).

I also saw people attributing this to DNS issues.


Three firewalls on three different WAN connections, all three behaving the same way?

Even if all three do have the same internet provider in common.

And then about 16,000 requests in a few seconds?
If I understand correctly, it seems it is the same firewall that sends the requests, right?
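
A rough way to check how many log lines actually land in a single second of the burst (the log path is the standard squid access log; adjust the timestamp to the second of your burst):

```shell
# Count proxy-log lines for the second of the burst
# (path and timestamp are assumptions based on the excerpt above):
grep -c '08:49:57' /var/log/squid/access.log
```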


My understanding is that iptables redirects the requests for port 80 to squid on port 3128, and squid, if it has an intercept directive in the config, will forward the traffic. For some reason the HTTP header is not reaching squid: either the firewall is dropping it (I doubt it), or the firewall is not receiving the header in the first place.
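
As a sketch, the redirect-plus-intercept setup described above would look roughly like this (IPFire generates its own rules; the interface name "green0" and the exact rule here are illustrative):

```shell
# Hypothetical NAT rule redirecting LAN port-80 traffic to squid:
iptables -t nat -A PREROUTING -i green0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128

# ...and the matching intercept listener in squid.conf:
#   http_port 3128 intercept
```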

If you use curl from IPFire, do you get the same message?
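
For example, from the IPFire shell (the host is illustrative):

```shell
# Plain HTTP request; with the transparent proxy active, outgoing
# port-80 traffic would normally be redirected through squid:
curl -v http://example.com/

# Explicit request through squid on its listening port:
curl -v -x http://127.0.0.1:3128 http://example.com/
```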

OK, but why does this happen only once a day, always at different times, and for about 30 seconds?

It must come from the incoming traffic. I don't have a hypothesis, besides a vague DNS problem. Can you cross-check the DNS logs in the same time intervals?

In the Unbound DNS log I have nothing at that hour on any of the three firewalls.

what about:

grep "08:49:57" /var/log/messages

anything helpful in the kernel log at that time?

For that time today it doesn't report anything on my home firewall.

Let’s think this through. A client in green or blue requests a connection to a web site. Unbound fetches the IP, the request from the client is packaged and sent on port 80 to the kernel, which redirects it to squid on port 3128; squid goes to fetch the page but does not receive the HTTP header from the target, triggering a storm of repeated requests from the client. At the very minimum you should see in the kernel logs the port 80 to port 3128 redirects during that time frame.

Do you agree with this?

EDIT: when the HTTP download is successful, I think you should also see the return traffic from 3128 to 80 in the kernel logs.
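
One way to actually get those redirects into the kernel log would be a temporary LOG rule, a sketch only (run as root, and remove it afterwards since it can be very noisy):

```shell
# Hypothetical temporary rule: log port-80 packets before the
# REDIRECT to 3128 happens; delete it later with -D instead of -I.
iptables -t nat -I PREROUTING -p tcp --dport 80 \
    -j LOG --log-prefix "http-to-squid: "
```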

I agree, but I don’t see anything. Tonight I’ll try to make a request and see what happens in the log, then I’ll update you.

you may want to try:

grep ":49:57" /var/log/messages

I also tried with grep ":49:57" /var/log/messages, but nothing; around 8 in the morning I have no messages in the log.

Nothing has happened yet today.

What version of squid is currently implemented in IPFire?

I would like to open a ticket on the portal. I found a ticket about an error that prevented browsing on HTTPS ports, but as I’m not an expert I can’t figure out if it applies to my case.

This is the link

In any case, the anomalous behavior occurs on all the firewalls that I monitored, and it has been appearing since January 2023.

Would opening a support case on IPFire’s Bugzilla make sense?

5.7, which is the latest release.

You could, but without some information as to what in IPFire is causing it, it will be difficult to figure out what to fix.

I don’t see those log messages that you are getting. If other people report the same thing, then we might be able to figure out what the common setup is and see if it can be replicated on a machine not currently having the problem. Once it can be replicated, a developer can investigate it on one of their machines.

I am not saying that you shouldn’t raise an IPFire bug, but I think you need to be clear that IPFire is the cause of the problem. Raising a bug with both squid and IPFire is not a good idea; they are unlikely to both be the root cause at the same time.

You could try re-installing Core Update 171 on one of the machines having the problem, restore the settings, and then see if that one no longer has the problem while the others still do.
That would be a good indication that IPFire is somehow linked to it.

Thanks for the reply. How do I go back to 171?

Looking at the squid log, I realized that a few seconds before the request storm starts, cache.log reports:
2023/02/07 08:49:31 kid1| WARNING! Your cache is running out of file descriptors listening port:
2023/02/06 09:08:01 kid1| WARNING! Your cache is running out of file descriptors listening port:

This appears on all the firewalls I’ve monitored.

I don’t know if it can be useful

According to the wiki page on the web proxy, this can occur if you have a large number of clients or high/unusual traffic. A limit is then reached where no more cache files can be opened.

It suggests increasing the number of file descriptors by 1024 or more. Increasing this number will also increase memory consumption.

See wiki page of file descriptors
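
At the squid level, the knob behind that wiki setting is the max_filedescriptors directive; a sketch, with an illustrative value (raise it in steps of 1024 as the wiki suggests, then restart squid):

```shell
# Example squid.conf directive (value is illustrative):
#   max_filedescriptors 4096

# After restarting squid, check the limit it actually reports:
squidclient mgr:info | grep -i 'file desc'
```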

From your earlier info it sounds like you don’t knowingly have high/unusual traffic, so while the above might mitigate the problem, it doesn’t solve it if you have traffic levels you don’t expect or know of.

It might be worth increasing the file descriptors entry and seeing what happens. Check whether the WARNING about file descriptors no longer occurs, and also whether the 16,000-odd entries still show up in the log.

You can’t reset to Core Update 171. You have to take a backup and save it on another computer and then do a fresh install from scratch and then restore the backup.


I doubt it, as the transparent proxy in IPFire is only on port 80.


I will be prepping a new machine with 171 this weekend.

For the descriptors, I tried increasing them as the wiki says.
I tried the command grep "WARNING" /var/log/squid/cache.log,
and after increasing the descriptors it no longer reports the warning.
But couldn’t the cache manager help?
I tried looking for info but didn’t find much, and browsing the various items without knowing what to look for isn’t much use.
Another strange thing: if I set 600 MB as the memory cache size limit, why does the services graph show an occupation of 800 MB?
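
The cache manager can at least show the live counters. A sketch, assuming squidclient is installed and the manager ACL allows localhost:

```shell
# Query squid's cache manager for descriptor and memory figures:
squidclient mgr:info | grep -i -e 'file desc' -e 'memory'
```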

My network hardware hasn’t changed in the last month, and anyway I don’t think it’s related to this aspect, since I see the same anomaly, the storm of requests, on 3 other systems as well.

Do these 4 systems have something in common? The same applications, similar purposes?