Memory fills up and services are stopped

Hello,

Lately I have been observing the following phenomenon again and again. In addition to the normal services such as the Squid web proxy and the OpenVPN server, I also have ClamAV and a Tor node running.
From time to time my 8 GB of RAM fills up and the kernel terminates either the OpenVPN server, Tor or ClamAV. The OpenVPN server restarts; ClamAV and Tor stay down.
Even when I have all services running and start the IDS as well, memory usage never exceeds 40% of the RAM, so how can it fill up completely? The kernel log records what is done as a countermeasure when the memory is full, but I don't know how this happens or how I can prevent it. Could someone help me get to the bottom of it?

Thanks a lot!

You may not be an isolated case!?

Yes, this looks very similar to my problem. I will bookmark the topic, thank you!

Hi all,
@mumpitz did you check which process triggers the OOM killer?

Best,

Erik


Hi, as for the process being killed, it seems to vary: Tor, ClamAV and the OpenVPN server have all been killed at times, sometimes all three processes in a row. I would say it always kills the programs that use the most memory, but I can't say for sure.
If it helps I can look for the logs. Currently I have everything disabled and the system runs normally now, but I would like to enable my services again.

Did you search for the first line where the OOM appears,
grep ' oom' /var/log/messages
and maybe also check the memory usage via the WUI at that time?
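Something along these lines might also show what the kernel actually decided to kill, not only which process triggered the OOM (just a sketch, assuming the default /var/log/messages location; the exact message wording differs between kernel versions):

  # first OOM occurrence in the log
  grep ' oom' /var/log/messages | head -n 1
  # the kernel's own summary of what it killed
  grep -i 'out of memory' /var/log/messages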

Kernel log, September 15, 2022:

05:23:59	kernel:	httpd invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
09:54:55	kernel:	tor invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
09:58:26	kernel:	vnstatd invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
21:43:22	kernel:	guardian invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
21:51:29	kernel:	guardian invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

That was within a single day, each time a different process, and it had happened before that as well. In addition, here is the weekly memory view; a short time later I stopped all services to look for a solution first.
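If I understand the log correctly, the process in the "invoked oom-killer" line is only the one whose allocation triggered the kill, not necessarily the one that was terminated. Assuming the usual kernel log format, the lines following each entry should show the memory state and which process was actually killed:

  # show the context around each OOM event, including the "Killed process ..." line
  grep -A 30 'invoked oom-killer' /var/log/messages | less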

Hi,

just for the record, the upcoming Core Update 171 will contain Tor 0.4.7.10 (commit).

In the changelog, the maintainers noted:

- Remove OR connections btrack subsystem entries when the connections
  close normally. Before this, we would only remove the entry on error and
  thus leaking memory for each normal OR connections. Fixes bug 40604;
  bugfix on 0.4.0.1-alpha.

With regards to the ongoing (and indeed pretty annoying) DDoS attack against Tor, it may well be that this memory leak has some effect in your case.
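Until the update lands, one possible stopgap could be to cap the memory Tor may use for its queues via its torrc (a sketch only; I have not tested this on IPFire, and the path /etc/tor/torrc is an assumption on my side):

  # ask Tor to start freeing buffers once its queues exceed this limit
  MaxMemInQueues 512 MB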

Thanks, and best regards,
Peter Müller


Hi all,
@pmueller nice catch. I am searching there a little in the dark, since we have had the same problems but with different processes. In here → IPFire went down last night, can't find cause - #12 by ummeegge, "openvpn-authen{tication}" triggers the OOM killer, and in here → IPFire went down last night, can't find cause - #8 by bonnietwin as well.
The scores reflect that → IPFire went down last night, can't find cause - #15 by ummeegge

Currently I am not sure what this is all about. Are there any changes in sysctl.conf which could cause those events?
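Just as a starting point for comparing systems, these are the memory-related knobs I would check first (standard sysctl keys, nothing IPFire-specific assumed here):

  # current virtual memory overcommit and swap behaviour
  sysctl vm.overcommit_memory vm.overcommit_ratio vm.swappiness
  # any explicit overrides in the config
  grep -v '^#' /etc/sysctl.conf | grep 'vm\.'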

Best,

Erik