Shipping logs to logstash

Hmm that is unfortunate.

I think I might return to the original plan of using filebeat in that case. I think it is cleaner than using a VPN, and I already have the PKI in place with certs and logstash authenticating certs etc.

@ummeegge

Hey mate, I was unable to get Go working on ipfire. I think it is not included, and I don’t see it as a pakfire.

What I did find though is that someone has created a docker image that compiles the filebeat repo for the raspberry pi. So on an external device I build filebeat 7.11 using that docker image, stick the output in a github repo that I pull to ipfire, and run the compiled filebeat on ipfire.
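For anyone wanting to do the same, the build inside that container boils down to a Go cross-compile of the beats repo. A rough sketch of what it does (the version tag and GOARM value are examples only, adjust for your target):

```
# Rough sketch of cross-compiling filebeat for a 32-bit ARM target (e.g. a raspberry pi).
# Assumes Go is available in the build container; tag and GOARM are examples only.
git clone --branch v7.11.0 --depth 1 https://github.com/elastic/beats.git
cd beats/filebeat
CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -o filebeat
# Copy the resulting 'filebeat' binary (plus a filebeat.yml) onto the target device.
```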

It is working, but yeah, I had no luck getting golang working on ipfire, so I gave up on the thought of compiling on the device.

And I am using certificates to encrypt and authenticate to logstash so this is so much better than the remote syslog option :smiley:

More info on this thread

Does logstash support remote syslog?

Hi @ms

Yes it does, and in fact I was initially using the remote syslog capability in ipfire to send to logstash. I configured suricata to dump into syslog and shipped it all to logstash that way.
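For reference, getting suricata to write to syslog is just a matter of enabling its syslog output in suricata.yaml; something along these lines (the facility and level are only examples):

```yaml
# suricata.yaml - enable the line-based syslog output (values are examples)
outputs:
  - syslog:
      enabled: yes
      facility: local5
      level: Info
```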

The problem with this, however, is that the sysklogd being used apparently does not support TLS, which means the logs are sent in plain text. There is also no authentication being done at logstash: you have to leave it wide open so any john doe can start flooding you with fake logs, unless you restrict by IP etc.

By using filebeat we can leverage a TLS connection to logstash, which both encrypts the logs and provides authentication, as only the cert I give to filebeat is allowed to send data to my logstash.
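The filebeat side of that is only a few lines in filebeat.yml. Roughly (host, port and paths below are placeholders, not my real setup):

```yaml
# filebeat.yml - ship logs to logstash over TLS with a client certificate
# (host, port and file paths are placeholders)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/suricata/fast.log

output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]
  ssl.certificate: "/etc/filebeat/client.pem"
  ssl.key: "/etc/filebeat/client.key"
```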

Yes, that is true. We are on a rather traditional version of sysklogd, which never was a problem because most people do not have any special logging requirements.

There are some developers working on replacing it with an alternative. Would you be able to contribute to/sponsor that?

Do you mean this? Recommended way to install filebeat - #6 by knightian

I’ve put my hand up to sit down, set up a build env and submit a pakfire for filebeat, now that I have it working on my ipfire.

Although I will say, the forcing of default port 514 for the rsyslog -> logstash path, with no way to change that port, made it terribly bothersome. It means making logstash, and therefore java (not just the normal java as I found out, but specifically the java bundled with logstash), able to bind a privileged port. I managed to get that sorted, but it was rough. Of course we can NAT that problem away if we’re behind a NAT, but it was a hurdle worth mentioning.
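For anyone who hits the same wall: one way to sort it is to grant the bundled JDK the capability to bind low ports. The paths below assume a stock logstash 7.x layout and are only an example; note that a capability-enabled binary ignores java’s own library rpath, so the JDK lib directory also has to be made known to the loader:

```
# Example only: let the logstash-bundled JDK bind ports below 1024
sudo setcap 'cap_net_bind_service=+ep' /usr/share/logstash/jdk/bin/java

# With the capability set, java can no longer find libjli.so via its rpath,
# so register the JDK lib directory with the dynamic loader.
echo '/usr/share/logstash/jdk/lib' | sudo tee /etc/ld.so.conf.d/logstash-jdk.conf
sudo ldconfig
```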

Yes, the current solution isn’t very flexible, but I would want to avoid running any other software on the firewall. Increasing the software stack - especially when it needs Go or Java - is probably not very desirable and will increase memory consumption massively.

The technically easiest choice seems to be upgrading the syslog daemon so that it supports TLS and make that port configurable. Or did I miss anything?


Yeah, if it’s an option. As long as the replacement syslog not only uses TLS for encryption via a server-supplied certificate, but also allows the syslog client on ipfire to present a cert that gets validated by logstash in the TLS handshake (this is mutual TLS / client certificate authentication, which is part of the TLS spec), then really that, plus port customisation, should be all as you say. The formatting is done on the logstash side; it’ll mould it into JSON using specified grok patterns for elastic ingestion.

If you guys are planning to do that, then I will just use my filebeat until that solution is ready, and I won’t bother with the filebeat pakfire submission if it can and will be covered by a drop-in syslog replacement.

Happy to set up a test endpoint on my logstash and provide certs for testing if wanted.

No, it isn’t on the roadmap yet. We thought about it, but there was no technical need to change.

So what should I do? Should I submit a pakfire for filebeat or not?

Hello @ms, can I please get an answer, because I do not want to go to the effort of submitting a pakfire only for you to reject it.

Hello Ian,

as mentioned before, I understood that the easiest version would simply be to configure a syslog server using TLS to connect to AWS’ infrastructure somewhere. That would save you a lot of work in compiling and maintaining any additional software.

Did I get this right?

If you want to submit a patch that integrates additional software, you are of course welcome to do so. But we would require you to maintain it and regularly update it as well as deal with any bug reports that might be coming in for it. We cannot have unmaintained software that never gets updated on the firewall.

If you are the only user of this, this is a lot of work for very little benefit and therefore I would prefer the option above.


I understood that the easiest version would simply be to configure a syslog server using TLS to connect to AWS’ infrastructure somewhere.

That’s not what I understood. I thought you were talking about upgrading sysklogd in ipfire to a version that supports TLS and allows a custom port? Are you saying to set up another syslog server to receive the unencrypted logs from ipfire and forward them on? That’s not ideal if that’s what you are thinking. The whole point of stuff like Intune is so we can go serverless on-prem. Yes, I know we could just use a little pi or something as the server, but it is a backwards step and another thing to maintain. Far better to ship directly from ipfire.

Or are you thinking that we should install something like rsyslog on ipfire and run it in addition to sysklogd?

That would save you a lot of work in compiling and maintaining any additional software.

I dunno, would it? I mean, it compiles in about 20 seconds at the moment in the docker container that spits out the arm executable.

If you are the only user of this, this is a lot of work for very little benefit and therefore I would prefer the option above.

I would not think I would be the only user of it, given SOCs are all the rage now and people are seeing the benefits of going serverless on-prem. Maybe I am just ahead of the times :wink:

Thinking about this bit that I said:

Or are you thinking that we should install something like rsyslog on ipfire and run it in addition to sysklogd?

Maybe a pakfire for rsyslog might be better? I do see immense value in securely offloading logs from the device directly to an aggregator/collector.
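To illustrate the kind of thing I mean, a rough sketch of rsyslog forwarding everything over TLS with a client certificate (the driver and action parameters are standard rsyslog options, but the paths, host and port are made up):

```
# Rough sketch: rsyslog forwarding over TLS with a client certificate.
# StreamDriverMode 1 = TLS only; AuthMode x509/name verifies the server cert.
# Paths, hostname and port are placeholders.
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.pem"
  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/client.pem"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/client.key"
)

action(
  type="omfwd"
  target="logstash.example.com"
  port="6514"
  protocol="tcp"
  StreamDriver="gtls"
  StreamDriverMode="1"
  StreamDriverAuthMode="x509/name"
)
```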

No, one syslog daemon is enough. As mentioned, sysklogd is a simple implementation that has served us well, but it cannot be extended by scripting like rsyslogd or syslog-ng can.

Compiling it isn’t the hard job :slight_smile:

I do not think that you are ahead of times. I just think that I am not exactly following you :slight_smile:

So, what is this all good for? Can you tell me the whole story about what you have on premises, what you want to achieve, and whether you are actually moving IPFire’s logs into a central logging server, or what the whole goal of all of this is? Maybe I overlooked something, but I feel we are still not on the same page :slight_smile:


Essentially, think about managing the networks of many different companies: a law firm, an accounting practice and so on. All of these companies have ipfire devices on premises.

Sitting in the sky (Azure, AWS, GCP, take your pick) is a logstash collector. The ipfire devices from all these different premises send their logs to the logstash collector, which in turn funnels them to an elastic instance that is buried away and not public facing, for ingestion and analysis.

Then kibana dashboards (and other alerting mechanisms) show things like suricata log events of interest for all the premises from a single interface.

Okay, that is what I thought. What interfaces does logstash provide?

logstash has no interface. It’s configured by config files: you run it as a daemon and it literally just listens on your specified port (TCP or UDP), receives the logs, and then in a pipeline config you create some grok patterns to format the log into JSON so it is ‘elastic friendly’ and send it off to elastic.

Oh wait I get what you mean by interfaces

It just listens on TCP/UDP, preferably a non-privileged port (you can get it to work on, say, port 514, I did, but it was a massive workaround). If using TCP you have the TLS stuff: set your CA cert and endpoint cert/key, and also distribute a cert from the same CA to filebeat clients and enable certificate auth in logstash.
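Concretely, the listening side is a beats input in the pipeline config, roughly like this (port and file paths are placeholders):

```
# Sketch of a logstash beats input with TLS and client-certificate auth
# (port and file paths are placeholders; the key must be in PKCS#8 format)
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.pem"]
    ssl_certificate => "/etc/logstash/certs/server.pem"
    ssl_key => "/etc/logstash/certs/server.key"
    ssl_verify_mode => "force_peer"
  }
}
```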

I did initially have ipfire sending syslogs to logstash on UDP 514 just as a proof of concept; of course this is not acceptable for real use. But it worked: logstash accepted the log and bundled it into a message field, which I could then apply grok patterns to in order to break it up into a more defined JSON object if I wanted. But I did not proceed past this point once I learned we had no TLS options in sysklogd, and that’s where I switched to filebeat.
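That proof of concept was essentially just this pipeline (a sketch; the grok pattern is the standard syslog example and the elastic host is a placeholder):

```
# Sketch of the UDP-514 proof-of-concept pipeline
input {
  udp { port => 514 }
}

filter {
  # break the raw syslog line in the "message" field into structured fields
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
}

output {
  elasticsearch { hosts => ["https://elastic.internal:9200"] }
}
```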

Kibana sits on top of elastic and provides the dashboarding, and we can easily build dashboards to suit our requirements in kibana. I have done this; I have it in production.

All from open source components, a full blown SOC :smiley:

Of course, ipfire logs are just one component of what’s being fed into elastic to make up the entire SOC.