Weird HTTP hang from Orange

Greetings,
I’ve found a WEIRD issue in my network and I’m stumped. I set up a webserver in the orange zone to host a few “big files”. I want them to be accessible over a plethora of connection types. Using the same 1.8GB ISO file for testing I can:

  • successfully download from Green and Red over SSH
  • successfully download from Green and Red over HTTPS
  • successfully download from Green and Red over FTP
  • successfully download from Green and Red over rsync

BUT! When pulling over HTTP, it gets to 781,197,296 bytes every time and hangs. If I try to resume the download (with wget -c or the like), it just sits there and never resumes. However, it will happily start a fresh download, until it gets exactly that far and hangs again. I CAN complete the download over HTTPS, but NOT over HTTP.
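For anyone who wants to reproduce this, the tests were nothing fancier than the following (hostname and file name here are placeholders, not my real ones):

    # From a client in Green or Red; server name and path are made up
    wget http://orange-web.example.lan/iso/test.iso       # stalls at 781,197,296 bytes
    wget -c http://orange-web.example.lan/iso/test.iso    # "resume" just sits there
    curl -C - -o test.iso http://orange-web.example.lan/iso/test.iso   # same: no resume
    wget https://orange-web.example.lan/iso/test.iso      # completes every time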

I thought it was the webserver. I had originally set it up with Lighttpd, so I tried others. Nope: both Nginx and Apache hang at exactly the same place over HTTP (and all three work over HTTPS!).

I thought it was something with the server itself (even though it’s fine over all the other protocols), so I copied the same ISO to a second host in the Orange zone. Same problem: HTTP hangs but HTTPS works.

To check whether it’s a zone issue, I CAN wget from both servers over HTTP to any other host in the Orange zone. However, no host in Red or Green can pull the ISO over HTTP to completion.

I figure this has to be something in IPFire, as it’s the only thing sitting between the Orange and Green zones. But I don’t do caching, I don’t have QoS enabled, and I don’t see anything in the firewall rules that would be blocking it either. I’ve trawled through the log files and I’m not seeing anything there either.
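If it helps anyone in the same boat: a quick way to see whether the firewall itself is where the transfer dies is to run tcpdump on the IPFire box on both zone interfaces while the download runs. A rough sketch (orange0/green0 are the usual IPFire interface names; the server IP is made up):

    # Run on the IPFire box in two terminals; 192.168.2.10 stands in for the Orange webserver
    tcpdump -ni orange0 host 192.168.2.10 and tcp port 80
    tcpdump -ni green0  host 192.168.2.10 and tcp port 80
    # If IPFire is the one eating the transfer, you'd expect segments to keep
    # arriving on the Orange side when it stalls but stop leaving via Green.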

Anyone have any thoughts?
Thanks!

Found it. Bloody hell… I love what Suricata gives me, but geez… when it breaks something, it isn’t obvious that it’s the thing doing the breaking, and there’s no great way inside IPFire to track down which rule is responsible, much less anything that tells you Suricata is the culprit in the first place.

However, in my case after a lot of digging, watching, and testing I narrowed it down to this rule:

GPL ATTACK_RESPONSE command completed [**] [Classification: Potentially Bad Traffic]

:man_shrugging:
Still trying to figure out what the heck that rule is doing and why it considers a Linux ISO download “Potentially Bad Traffic”. Disabling that rule, though, lets the webserver function the way it’s supposed to.
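For anyone wanting to check their own box: the alert shows up in Suricata’s fast.log while the transfer is stalling, and the log line carries the signature ID you need if you want to suppress just that one rule instead of a whole ruleset. A rough sketch (the log path is the stock Suricata default and the SID is only an example; use whatever your own log reports):

    # Watch for the alert while the HTTP download is running
    tail -f /var/log/suricata/fast.log | grep "GPL ATTACK_RESPONSE command completed"

    # The alert prints IDs like [1:<sig_id>:<rev>]. To silence only that rule,
    # a suppress line in Suricata's threshold.config works, e.g.:
    #   suppress gen_id 1, sig_id 2100494

(If it is the stock GPL rule, it appears to be little more than a content match on the literal text “Command completed”, so a 1.8 GB binary served over cleartext HTTP can trip it purely by chance; over HTTPS the payload is encrypted, which would explain why those downloads sail through.)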
