Which loads the Hyper-V drivers at boot time, or something like that?
And that’s pretty much all I did, lol.
Then I set up IPFire as normal: for RED I told it to use DHCP to get an IP, and for GREEN I set a static address that was non-conflicting and within the range of my VNET in Azure. I turned the IPFire DHCP server off (because Azure is handling that for the moment).
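For reference, the choices made in the setup wizard end up in IPFire's network settings file. A minimal, illustrative excerpt (the GREEN address and netmask here are assumptions; use values matching your own VNET subnet):

```shell
# /var/ipfire/ethernet/settings (illustrative excerpt, not a full file)
RED_TYPE=DHCP            # RED pulls its address from Azure's DHCP
GREEN_ADDRESS=10.0.1.4   # assumed static address inside the VNET range
GREEN_NETMASK=255.255.255.0
```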
Once setup was complete I powered the VM down.
Then I created a storage container on Azure and used AzCopy to copy the VHD into Azure. Instructions for this are here.
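The upload step looks roughly like this. The storage account, container name, and SAS token are placeholders you would substitute with your own; the important detail is `--blob-type PageBlob`, since Azure requires VHDs to be page blobs:

```shell
# Upload the local VHD to the storage container as a page blob.
# <account>, <container>, and <SAS-token> are placeholders for your own values.
azcopy copy "ipfire.vhd" \
  "https://<account>.blob.core.windows.net/<container>/ipfire.vhd?<SAS-token>" \
  --blob-type PageBlob
```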
Then in Azure I created a Managed Disk from the VHD (storage blob), and from that Managed Disk I created a VM with 2 NIC interfaces. I used a Gen 1 type VM; I'm not sure whether Gen 2 will work, but it probably will.
In Azure, I assigned the second NIC on the VM the same static IP I had given to the GREEN NIC during setup.
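Done through the Azure CLI, the disk-to-VM steps above would look something like this. The resource group, VNET, subnet names, GREEN address, and VM size are all assumptions for illustration; note `--ip-forwarding true` on the NICs, which a router/firewall VM generally needs:

```shell
# Create a Managed Disk from the uploaded page blob (Gen 1 VM type).
az disk create --resource-group myRG --name ipfire-disk \
  --source "https://<account>.blob.core.windows.net/<container>/ipfire.vhd" \
  --os-type Linux --hyper-v-generation V1

# Two NICs: RED (dynamic) and GREEN (the same static IP used during setup).
# Subnet names and the 10.0.1.4 address are assumptions.
az network nic create --resource-group myRG --name ipfire-red \
  --vnet-name myVnet --subnet red-subnet --ip-forwarding true
az network nic create --resource-group myRG --name ipfire-green \
  --vnet-name myVnet --subnet green-subnet \
  --private-ip-address 10.0.1.4 --ip-forwarding true

# Boot a VM directly from the existing OS disk with both NICs attached.
az vm create --resource-group myRG --name ipfire-vm \
  --attach-os-disk ipfire-disk --os-type linux \
  --nics ipfire-red ipfire-green --size Standard_B2s
```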
Booted up the VM, bingo, I have web interface access from another machine on the same network as the GREEN NIC.
Now I can set about making an S2S link to my on-prem IPFire and get rid of the junk Azure Basic Gateway, and even save some of my Azure credits in the process. Lovely!
Yes, IPFire runs on Azure. We made loads of changes to make that actually happen.
However, I would recommend against changing the initramdisk because it will be overwritten with the next update. Manually configuring the system also works, but you will probably not pick up any automatic configuration changes from the hypervisor.
Nice! I thought that might have been the case, as I didn’t put in any of the kernel patches or anything and it still worked, to my surprise. I wasn’t expecting it to work out of the box, so to speak, so that’s a nice bonus.
Interesting: I’m seeing the same throughput to an Azure Basic Gateway as I was to IPFire running on Azure. The Azure Basic Gateway is rated for 100 Mbit/s.
It’s not really great. My connection is 50/12, so I shouldn’t be getting only 25 Mbit/s max on the download. Maybe IPSec doesn’t give good performance on IPFire no matter where it runs; is that what you’re suggesting? I got exactly the same result connecting to IPFire virtualised in Azure as I do with my on-prem IPFire connecting to the Azure Gateway instead. My IPFire box is a quad-core J3160 (has AES-NI) with 8 GB RAM and is hardly breaking a sweat.
MTU is 1400, as per the recommendation from Microsoft. Other than that, the settings are pretty much whatever IPSec chooses, so something doesn’t seem right, I think?
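The 1400 MTU recommendation comes from the encapsulation overhead IPSec adds on top of each packet. A rough back-of-the-envelope, assuming typical ESP-in-UDP (NAT-T) with AES-CBC and a 16-byte ICV (the individual header sizes are assumptions about the negotiated proposal, not measured values):

```shell
# Rough per-packet overhead for ESP over UDP (NAT-T), assumed typical values:
ETH_MTU=1500
IP_HDR=20          # outer IPv4 header
UDP_NATT=8         # NAT-T UDP encapsulation
ESP_HDR=8          # ESP SPI + sequence number
ESP_IV=16          # AES-CBC initialisation vector
ESP_PAD_TRAILER=16 # worst-case padding + pad-length + next-header
ESP_ICV=16         # integrity check value
OVERHEAD=$((IP_HDR + UDP_NATT + ESP_HDR + ESP_IV + ESP_PAD_TRAILER + ESP_ICV))
echo "overhead: ${OVERHEAD} bytes, usable inner MTU <= $((ETH_MTU - OVERHEAD))"
```

That lands around 84 bytes of overhead, i.e. an inner MTU of at most ~1416, so Microsoft's 1400 figure is simply that with a little headroom; it should not by itself explain halved throughput.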
Even though I had UDP ports 500 and 4500 assigned to the VPN class, with the maximum set to 50 Mbit/s, it still kills my IPSec tunnel when QoS is on.
I think maybe the QoS is doing some funky fragmentation or something? Is it possible to bypass the QoS for the IPSec tunnel? Because it’s properly killing it.
I’ve classified both iperf and IPSec into the VPN class, so the traffic is all in one category, and there is no other significant traffic happening according to the graph, yet QoS does not allow the VPN class to reach the maximum I set. The only way I can reach the maximum is to turn QoS off completely.
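For anyone wanting to reproduce the measurement, an iperf3 run through the tunnel is enough to show the ceiling. The server address here is an assumed host on the Azure GREEN subnet, and 50M matches the QoS class maximum described above:

```shell
# On a host behind the Azure-side GREEN (assumed address 10.0.1.10):
#   iperf3 -s
# From an on-prem host, through the IPSec tunnel:
iperf3 -c 10.0.1.10 -t 30            # TCP throughput through the tunnel
iperf3 -c 10.0.1.10 -u -b 50M -t 30  # UDP pinned at the 50 Mbit/s class max
```

Comparing the two runs with QoS on and off makes it obvious whether the shaper, rather than IPSec itself, is the bottleneck.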
Through IPSec? That’s pretty cool! Yeah, I guess it’s this QoS bug I’m facing that kills it no matter where I run it; with QoS off it runs fine on my link. Shame I deleted the Azure one, or I could have retested. Maybe I will at some point; at least I know how to do it now.