Shipping logs to Logstash

I understood that the easiest version would simply be to configure a syslog server using TLS to connect to AWS’ infrastructure somewhere.

That’s not what I understood. I thought you were talking about upgrading sysklogd in IPFire to a version that supports TLS and allows a custom port? Are you saying to set up another syslog server to receive the unencrypted logs from IPFire and forward them on? That’s not ideal, if that’s what you are thinking. The whole point of stuff like Intune is so we can go serverless on-prem. Yes, I know we could just use a little Pi or something as the server, but that is a backwards step and another thing to maintain. Far better to just ship directly from IPFire.

Or are you thinking that we should install something like rsyslog on IPFire and run it in addition to sysklogd?

That would save you a lot of work in compiling and maintaining any additional software.

I dunno, would it? I mean, it compiles in about 20 seconds at the moment in the Docker container that spits out the ARM executable.

If you are the only user of this, this is a lot of work for very little benefit and therefore I would prefer the option above.

I would not think I would be the only user of it, given SOCs are all the rage now and people are seeing the benefits of going serverless on-prem. Maybe I am just ahead of the times :wink:

Thinking about this bit that I said:

Or are you thinking that we should install something like rsyslog on IPFire and run it in addition to sysklogd?

Maybe a Pakfire add-on for rsyslog might be better? I do see immense value in securely offloading logs from the device directly to an aggregator/collector.

No, one syslog daemon is enough. As mentioned, sysklogd is a simple implementation that does its job well, but it cannot be extended by scripting like rsyslogd or syslog-ng can.

Compiling it isn’t the hard job :slight_smile:

I do not think that you are ahead of the times. I just think that I am not exactly following you :slight_smile:

So, what is this all good for? Can you tell me the whole story about what you have on premises, what you want to achieve, and whether you are actually moving IPFire’s logs into a central logging server, or what the whole goal of all of this is… Maybe I overlooked something, but I feel we are still not on the same page :slight_smile:


Essentially, think about managing the networks of many different companies: a law firm, an accounting practice and so on. All of these companies have IPFire devices on premises.

Sitting in the sky (Azure, AWS, GCP, take your pick) is a Logstash collector. The IPFire devices from all these different premises send their logs to the Logstash collector, which in turn funnels them to an Elastic instance tucked away (not public facing) for ingestion and analysis.

Then Kibana dashboards (and other alerting mechanisms) show things like Suricata log events of interest for all the premises from a single interface.

Okay, that is what I thought. What interfaces does Logstash provide?

Logstash has no interface. It’s configured by a pipeline config file; you run it as a daemon and it literally just listens on your specified port (TCP or UDP), receives the logs, and then in that config file you create some grok patterns to format the log into JSON so it is ‘Elastic friendly’ and send it off to Elastic from Logstash.
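Roughly, a minimal pipeline along those lines might look like this (a sketch only; the port, index name and Elasticsearch address are made up, and the grok pattern would need tailoring to IPFire’s actual log format):

```
# hypothetical pipeline, e.g. /etc/logstash/conf.d/ipfire.conf
input {
  udp {
    port => 5514            # arbitrary non-privileged port
    type => "syslog"
  }
}

filter {
  grok {
    # split the raw syslog line into structured fields
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

output {
  elasticsearch {
    hosts => ["https://elastic.internal:9200"]   # not public facing
    index => "ipfire-%{+YYYY.MM.dd}"
  }
}
```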

Oh wait, I get what you mean by interfaces.

It just listens on TCP/UDP, preferably on a non-privileged port (you can get it to work on, say, port 514, I did, but it was a massive workaround). If using TCP you have the TLS stuff: set your CA cert and endpoint cert/key, distribute a cert from the same CA to the Filebeat clients, and enable certificate auth in Logstash.
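As a rough illustration of that TLS setup (the exact option names vary between Logstash versions, and all paths here are placeholders), a Beats input that requires a client certificate signed by our CA might look like:

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
    ssl_verify_mode => "force_peer"   # reject clients without a cert from our CA
  }
}
```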

I did initially have IPFire sending syslogs to Logstash on UDP 514 just as a proof of concept; of course this is not acceptable for real use. But it worked: Logstash accepted the log and bundled it into a message field, which I could then apply grok patterns to in order to break it up into a more defined JSON object if I wanted. But I did not proceed past this point once I learned we had no TLS options in sysklogd, and that’s where I switched to Filebeat.

Kibana sits on top of Elastic and provides the dashboarding, and we can easily make dashboards to suit our requirements in Kibana. I have done this; I have it in production.

All from open-source components, a full-blown SOC :smiley:

Of course, IPFire logs are just one component of what’s being fed into Elastic to make up the entire SOC.

But also think of workplaces with no servers: using Intune + Azure AD for device and user management, and SharePoint for files, doing away with the need for server infrastructure on-prem. Having a server just to forward things on is not really wanted in these situations.

Hi all,
I built Rsyslog a longer time ago to enable the already available TCP option in the WUI. This development → git.ipfire.org Git - people/ummeegge/ipfire-2.x.git/commit is kind of old but should only need to be updated.

If interested, feel free to grab it. TLS isn't activated in the config but should not be a problem at all.
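For reference, activating TLS forwarding in rsyslog would roughly look like this (a sketch only; certificate paths and the target host are placeholders, and the exact parameters depend on the rsyslog version and which netstream driver is built in):

```
# sketch of an rsyslog TLS forwarding snippet
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.crt"
  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/client.crt"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/client.key"
)

# forward everything to the central collector over TLS
action(
  type="omfwd"
  target="logstash.example.cloud"
  port="6514"
  protocol="tcp"
  StreamDriver="gtls"
  StreamDriverMode="1"                               # 1 = TLS only
  StreamDriverAuthMode="x509/name"                   # verify the server certificate
  StreamDriverPermittedPeers="logstash.example.cloud"
)
```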

Best,

Erik


Nice man, you rock!

Why did this never make its way into the shipped IPFire images?

I sent it to the dev mailing list at that time; you can find the discussion here → [PATCH] Rsyslog: New advanced syslogger [TESTING]. It was mainly a coordination problem, I think.

Best,

Erik

From what I can see, building and adding the rsyslogd binary is another step beyond this, right?

Oh, I see, in make.sh it will build the rsyslogd binary when making the IPFire image.

Very interesting. Maybe I should be learning to fork, merge in this commit and build my own IPFire images.

Yes, this is what I meant by interface. Basically: How do I talk to Logstash.

I assumed you would connect to something that is running in the cloud, but you are referring to the local service (the big chunky Java one) that you can push your logs into… And whatever that does does not really matter to us.

I don’t really get why you would use TLS on localhost, but hey.

Why not?

Yes, Logstash is in the cloud. Filebeat is what I am using locally on IPFire to send the logs to Logstash.

I think Filebeat is the chunky Java thing that you mean. I think you are getting confused, thinking that Logstash is on IPFire; it’s the thing in the cloud. Filebeat is the thing that sends to Logstash. Sysklogd can also send to Logstash on UDP 514, but that is unencrypted and so obviously not acceptable, unless you set up a VPN between the two etc., which is overkill. The overhead from OpenVPN alone (I guess maybe using IPsec would address this) would probably negate any gains made by avoiding Filebeat, I’d reckon, which is why a native syslogd implementation with TLS is the best choice, I think.

Here is a diagram:

IPFire + Filebeat → Internet → Logstash → Elastic


I enable TLS on all the things out of principle; especially when running in a cloud environment, even though it’s localhost, I prefer the movement of 1s and 0s to be encrypted. But in this case I can see I have confused you.

I was sending from sysklogd to Logstash (cloud) unencrypted over the internet on UDP/514, which is obviously not good to do. I set that up as a PoC so I could ascertain that, yes, Logstash is capable of receiving logs directly from syslog; it is. Also, I believe it’s poor form to set things like Logstash to use a privileged port like 514. I’m unsure why, though (presumably because binding to ports below 1024 requires running as root).

To remedy that, I compiled Filebeat for the RPi and put it on my IPFire, and now Filebeat is sending syslogs and Suricata logs to Logstash over TCP/xxxxx with TLS.
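For anyone wanting to reproduce this, the Filebeat side is roughly the following (a sketch only; the log paths, host name and port are assumptions, not a tested IPFire config):

```
# filebeat.yml (sketch; paths, host and port are placeholders)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages            # syslog output on IPFire
  - type: log
    paths:
      - /var/log/suricata/eve.json   # assumed Suricata EVE JSON location
    json.keys_under_root: true

output.logstash:
  hosts: ["logstash.example.cloud:5044"]
  ssl:
    certificate_authorities: ["/etc/filebeat/ca.crt"]
    certificate: "/etc/filebeat/client.crt"   # client cert for certificate auth
    key: "/etc/filebeat/client.key"
```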

Although, a point in favour of rsyslog: I believe it can output the logs as JSON, which would reduce processing time on the Logstash side. @ms, is there any chance of getting Erik’s rsyslog commit above merged into the next build? I would think that would be perfect tbh, assuming it is still working.

But then maybe that is for me to learn: to compile the image myself with this code.
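On the JSON point above, a minimal sketch of what an rsyslog template emitting JSON lines could look like (field selection and the target are illustrative only, not a tested config):

```
# sketch: build a JSON line per message and forward it
template(name="json-fwd" type="list") {
  constant(value="{\"timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"severity\":\"")
  property(name="syslogseverity-text")
  constant(value="\",\"message\":\"")
  property(name="msg" format="json")   # JSON-escape the message body
  constant(value="\"}")
}

action(type="omfwd" target="logstash.example.cloud" port="6514" protocol="tcp" template="json-fwd")
```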

Not to beat a dead horse, but I just noticed that the June 7, 2021 meeting discussed this.

Under the heading “Migrating towards modern software” we find:

  • sysklogd → rsyslogd: Erik attempted this, but project stalled

“Stalled” sounds different to me from “coordination problem”, but I’d like to reiterate that it would be really great if this work could be integrated.

Thanks.


I can’t get this to work. I have a local ELK VM from SANS, and I cannot seem to get remote logs to come in at all, even after configuring remote logging on the client side. Can you go into detail?