Memory fills up and services get killed

Hello,

I have been observing the following phenomenon again and again lately. I have ClamAV and a Tor node running in addition to the normal services like the Squid web proxy and the OpenVPN server.
From time to time my 8 GB of RAM fills up and the kernel terminates either the OpenVPN server, Tor, or ClamAV. The OpenVPN server restarts itself; ClamAV and Tor stay down.
Even when I have all services running and start the IDS as well, the system never uses more than 40% of the RAM. So how can the memory overflow? The kernel log records what is done as a countermeasure when memory runs out, but how this happens and how I can prevent it I don't know. Could someone help me get to the bottom of it?

Thanks a lot!

You may not be an isolated case!?

Yes, this looks very similar to my problem. I will bookmark the topic, thank you!

Hi all,
@mumpitz did you check which process triggers the OOM killer?

Best,

Erik


Hi, as for which process gets killed, it seems to vary: Tor, ClamAV, and the OpenVPN server have all been killed at times, sometimes all three in a row. I would say it always kills the programs that use the most memory, but I can't say for sure.
If it helps, I can look for the logs. Currently I have everything disabled and it runs normally now, but I would like to enable my services again.
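
In case it helps further: which processes the kernel would target first can be checked at runtime by listing their OOM scores. A rough sketch using the standard procfs files; the top entries are the likeliest kill candidates at that moment:

# print each process's OOM badness score and name, highest first
for pid in /proc/[0-9]*; do
    printf '%s %s\n' "$(cat "$pid/oom_score")" "$(cat "$pid/comm")"
done 2>/dev/null | sort -rn | head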

Did you search for the first line where the OOM appears,
grep ' oom' /var/log/messages
and maybe also check the memory usage via the WUI at that time?
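
To see the kernel's full report around the first hit (it includes a task dump showing who was actually holding the RAM), something like this should work, assuming the entry has not yet been rotated into an older messages file:

# show 30 lines of context after each OOM invocation
grep -A 30 'invoked oom-killer' /var/log/messages | less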

Kernel log, September 15, 2022:

05:23:59	kernel: 	httpd invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
09:54:55	kernel: 	tor invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
09:58:26	kernel: 	vnstatd invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
21:43:22	kernel: 	guardian invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
21:51:29	kernel: 	guardian invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

That was all within one day, each time a different process, and it had happened before that as well. In addition, here is the weekly memory graph; a short time later I stopped all services in order to look for a solution first.
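
One way to catch the slow growth before the next kill would be to snapshot the biggest memory consumers periodically, for example via a cron entry. A sketch only; the log path is arbitrary, and that your ps supports --sort is an assumption:

# every 5 minutes: timestamp plus the 10 largest processes by resident memory
*/5 * * * * (date; ps -eo pid,rss,comm --sort=-rss | head -n 11) >> /var/log/mem-top.log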

Hi,

just for the record, the upcoming Core Update 171 will contain Tor 0.4.7.10 (commit).

In the changelog, the maintainers noted:

- Remove OR connections btrack subsystem entries when the connections
 close normally. Before this, we would only remove the entry on error and
 thus leaking memory for each normal OR connections. Fixes bug 40604;
 bugfix on 0.4.0.1-alpha.

With regard to the ongoing (and indeed pretty annoying) DDoS attack against Tor, it may well be that this memory leak plays a role in your case.
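
Once the update is installed, whether the fixed version is actually running can be checked on the console; it should report 0.4.7.10 or later:

# print the installed Tor version
tor --version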

Thanks, and best regards,
Peter Müller


Hi all,
@pmueller nice catch. I am searching a little in the dark here, since we have had the same problems but with different processes. Here → IPFire went down last night, can't find cause - #12 by ummeegge, “openvpn-authen{tication}” triggers the OOM killer, and here → IPFire went down last night, can't find cause - #8 by bonnietwin too.
The scores reflect that → IPFire went down last night, can't find cause - #15 by ummeegge

I am currently not sure what this is all about. Are there some changes in sysctl.conf that could cause those events?
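
For comparison between the affected systems, the memory-related kernel tunables can be dumped like this; which of these keys matters here, if any, is exactly the open question:

# current overcommit and reclaim settings
sysctl vm.overcommit_memory vm.overcommit_ratio vm.min_free_kbytes vm.swappiness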

Best,

Erik

Hi, is there any news here? My memory is still constantly filling up, and there is always some other process involved. Is there something fundamentally wrong with the memory allocation? Is there a process that regulates it? Maybe look for a bug there? I can help with information, just tell me what to do; it can't get any worse than it is now.
I have now started to uninstall add-ons (squidclamav, tor, clamav), depending on what was killed.
But it can't be right that after some time the memory fills up for no apparent reason.
I could also observe that memory usage increases slightly over time until there is a sudden jump; the increase then flattens out again, and only at a later point does it go all the way up and randomly kill a process.
This situation is terrible: you never know which service will be terminated, or when.
I'm open to any solution and will also make myself available as a guinea pig to test ideas; the main thing is that this finally stops.
Thank you!
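
If raw numbers help, overall usage could also be traced over time with a simple loop (a sketch; run it in the background or in a screen session, and the log path is arbitrary):

# append a timestamped memory summary every 5 minutes
while true; do (date; free -m; echo) >> /var/log/mem-trace.log; sleep 300; done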

Hi, just a question (I haven't reread the whole thread):
Does the problem also occur without the Tor node?
Tor may be a process that introduces temporarily huge memory requests, which get other services killed.
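
If Tor's bursts turn out to be the trigger, upstream Tor has a torrc option to cap the memory used for its queues; whether manual torrc edits are preserved by the IPFire setup is an assumption to verify:

# in the torrc: ask Tor to keep its queues below this limit
MaxMemInQueues 512 MB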

Hi, Tor is the last thing I have now uninstalled. Memory usage is currently very low, but after a few hours you can already see a slight constant increase, as before. Now it is a matter of waiting to see whether an overflow happens here as well.
Is this a known bug in Tor?
Is there a fix or workaround that can be applied?
I would like to continue to provide the Tor node.
Thank you!

Hi!
Today I also got a full memory, with → httpd invoked oom-killer:
So Tor was not running, nor even installed, on my system.
It must be another issue.
Any ideas what I can do?

I get the same thing, but my logs only show that it's killing the openvpn-authent process. I'm using Core Update 171. I have neither tor nor squid installed.

Hello @mikehand

Welcome to the IPFire community.

Your problem is different from the one from @mumpitz, as it is only related to one package hitting the OOM killer.

A bug has already been raised for the problem with openvpn-authenticator
https://bugzilla.ipfire.org/show_bug.cgi?id=12963


Squid is not an add-on. It is the web proxy, which you can find under the menu item Network - Web Proxy. It is a core part of the IPFire installation. It may not be running if you have not configured it and pressed the “Save and Restart” button.


Everything has looked fine for five days now, but half of my system is deactivated.