Install on RAID1


New user here, and also a new user of IPFire!
I’ve been using IPFire for testing purposes for about a month
on bare-metal hardware and it’s been working great. Basically zero issues.

As I feel this will be a long-term setup, I wanted to set up RAID1 so that
if one drive goes bad I still have a working IPFire until I replace it.
Also, I won’t need to reinstall anything. Just a smooth swap.
Is this a good idea, or should I go with a single drive and a backup of the config?

I booted the install USB in UEFI mode and the install went fine until
it was time to install the GRUB bootloader. That was a no-go.
The installer just showed the message “Failed to install bootloader”.

I rebooted, disabled UEFI, and went through the installer again; this
time it completed without problems.
I noticed (on the second tty) that the RAID array was stopped once the install had finished and the installer was waiting for me to reboot.
I restarted the array with “mdadm --assemble --scan” and let it finish syncing.
Once it was done syncing I rebooted, but IPFire failed to start:
no boot disk was found.
I went back to the BIOS settings and enabled UEFI again, and IPFire could
boot with a functioning RAID1 setup. Nice, I thought!
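In script form, what I did on the second tty was roughly this (a sketch, guarded so it only does anything on a box that actually has mdadm and md support):

```shell
# Sketch: restart stopped md arrays and check resync progress.
restart_arrays() {
  if command -v mdadm >/dev/null 2>&1 && [ -e /proc/mdstat ]; then
    mdadm --assemble --scan   # bring up every array found in the metadata
    cat /proc/mdstat          # a "recovery = ...%" line shows sync progress
  else
    echo "skipped: mdadm or /proc/mdstat not available on this system"
  fi
}

restart_arrays
```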

I even tried shutting down, removing one of the SSDs, and it booted up
fine on the remaining SSD only.
When I put the drive back I had to “add” it to the array again; it synced
fine and was back to normal. Nice!
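The re-add itself was a single mdadm command, roughly like the sketch below (md0 and sdb are the names on my box and are just examples):

```shell
# Sketch: re-add a replaced/removed member disk to a mirror.
readd_member() {
  array=$1
  member=$2
  if [ -b "$array" ] && [ -b "$member" ]; then
    mdadm --manage "$array" --add "$member"  # needs root; starts a resync
    cat /proc/mdstat                         # watch recovery progress here
  else
    echo "skipped: $array or $member not present"
  fi
}

readd_member /dev/md0 /dev/sdb
```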

But I did notice some errors.
Checking the disk partitions with fdisk, I get the same error about
the partition table on both drives:

“GPT PMBR size mismatch (976772863 != 976773167) will be corrected by write. The backup GPT table is not on the end of the device”
The SSDs I use are not the same brand/model, but they do have the same number of sectors.

The partitions were made by the installer, so I have little or no control over partition sizes during the installation process.

Is this something to worry about that should be fixed, or can I just ignore it?
If it is fixable, or should be fixed, can someone point me in the proper direction
on how to fix it?
I could boot a live CD and maybe fix it from there?
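To be specific about what I’m asking: from what I’ve read, sgdisk’s -e option is supposed to move the backup GPT structures to the end of the disk, which is what the fdisk message complains about. So something like the sketch below (disk path is just an example, and I’d run it from a live USB with the array stopped) might be the fix, but I’d like confirmation before writing anything to the disks:

```shell
# Sketch: relocate the backup GPT to the end of the disk with sgdisk.
# WARNING: this writes to the disk; run from a live system, array stopped.
fix_backup_gpt() {
  disk=$1
  if [ -b "$disk" ] && command -v sgdisk >/dev/null 2>&1; then
    sgdisk -e "$disk"   # move backup GPT header/table to end of disk
  else
    echo "skipped: $disk not present or sgdisk not installed"
  fi
}

fix_backup_gpt /dev/sda
```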


I will just say that getting up and running from a backup config is very fast. Throw a new drive in, install a fresh copy of IPFire, and restore the backup. Usually 15 minutes or so. If you periodically check Status->Media->SMART Information, you can get an idea of drive health and preemptively replace a drive on your schedule.
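If you prefer the shell over the WUI, smartmontools gives the same health data. A rough sketch (smartctl may not be installed on your box, and the device path is just an example):

```shell
# Sketch: query a drive's overall SMART health self-assessment.
smart_health() {
  disk=$1
  if command -v smartctl >/dev/null 2>&1 && [ -b "$disk" ]; then
    smartctl -H "$disk"   # prints PASSED or FAILED (needs root)
  else
    echo "skipped: smartctl not installed or $disk not present"
  fi
}

smart_health /dev/sda
```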

If you want to continue with RAID1, that might be slightly faster in the event of a drive failure. But I don’t know enough about it to answer your other questions. Maybe someone else here will.

Best of luck!

Thank you for the reply.
I did boot into a live USB to check out the partitions, hoping to be able
to resize them with GParted, but the RAID array was made directly on the sda/sdb devices without any partitions, so there was nothing to alter.

Since nobody else has replied on this matter, I guess my only option is to go with one
drive and restore the config in case of drive errors.

As you say, that should be done in no time.


I have found IPFire RAID1 to be non-working. That also applies to Intel RAID, as it interacts with MDRAID in some mysterious way. For larger computers with space for expansion cards, the solution is probably a “hardware” fakeRAID card. Virtualizing IPFire on a RAID system is also a functional solution.

If you want IPFire installed on a RAID array, make sure you have two hard disks in the system; when it comes time to select the hard disk, select both of them and IPFire will create a RAID-1 array for the installation.

You can check the status of the raid array by selecting the WUI menu Status - Mdstat
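The same status is also visible on the console from /proc/mdstat, e.g.:

```shell
# Read-only check of RAID status; safe to run anywhere.
show_raid_status() {
  if [ -e /proc/mdstat ]; then
    cat /proc/mdstat   # "[UU]" means both mirror members are active
  else
    echo "no md arrays on this system"
  fi
}

show_raid_status
```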

I tested the RAID1 installation and boot a few days ago (on a legacy-only system, though) with a nightly build of the next tree, and there are no relevant changes compared to the current stable.

I don’t know if you missed this part in my first post, but the installation was made using the installer. There is no way of choosing RAID1 unless you have
two drives connected, so that part is pretty foolproof.
The problem is how the RAID array is made by the installer, at least on my hardware.
I have not tried this in a virtual environment or on other hardware.

I used the latest stable, Core Update 185, when I was trying this, and the only way
to get through the GRUB install was in legacy mode.
The installer managed to finish, but I could not boot until I switched back
to UEFI mode.
I am in no way an expert, but reading the error I got, it looks like
the way the md array was made might be wrong.