Upgrade 181 can't restart error -> setfont: ERROR setfont.c:417

I have installed core160 for testing, and there the upgrade to core181 still works. I'm running out of ideas.

My original IPFire is from 2018-06-26. Which core was current at the time?

I think my installations are older. I cannot find any definite hint, but I think the "newest" non-working one is from 2017. The OpenVPN host certificate is from "Not Before: Jul 23 22:49:17 2017".
The oldest installation is from 2015, and none of them boot after the upgrade.
A new test installation from this summer works.
This is strange.

Maybe CU 120…

If your IPFire systems were installed in 2018 or earlier and have never been re-installed, then it is likely that your boot partition is not large enough for the kernel etc. that needs to fit in it.

Installations from 2018 and earlier also had separate /var and root partitions, and the root partition is likely to be getting full as well. That should, however, be flagged during the upgrade and prevent it from occurring, which is not happening in your cases.

The boot partition is more complicated because it is not easy to calculate the size of everything that will end up in it.

What does the command df -hl show for your systems?

This is the output of df -hl

Filesystem Size Used Avail Use% Mounted on
devtmpfs 720M 4.0K 720M 1% /dev
tmpfs 740M 0 740M 0% /dev/shm
tmpfs 740M 380K 740M 1% /run
/dev/sda3 31G 2.2G 28G 8% /
/dev/sda1 110M 51M 51M 50% /boot
/var/lock 8.0M 12K 8.0M 1% /var/lock
none 228M 240K 228M 1% /var/log/vnstat
none 328M 26M 302M 8% /var/log/rrd

[root@dus-fw2-x64 ssh]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 468M 4.0K 468M 1% /dev
tmpfs 489M 12K 489M 1% /dev/shm
tmpfs 489M 516K 488M 1% /run
/dev/sda3 2.0G 1.6G 205M 89% /
/dev/sda1 306M 51M 242M 18% /boot
/dev/sda4 5.5G 3.3G 2.0G 62% /var
/dev/sdb1 2.0G 3.5M 1.9G 1% /opt/pakfire
/var/lock 8.0M 12K 8.0M 1% /var/lock
none 489M 30M 459M 7% /var/log/rrd
none 489M 388K 488M 1% /var/log/vnstat

I booted the upgraded, non-working system in a live Linux. I think the problem is not the disk space:

/dev/sda3 2.0G 1.7G 181M 91% /mnt
/dev/sda1 306M 51M 233M 18% /mnt/boot
/dev/sda4 5.5G 3.3G 2.0G 63% /mnt/var

@tom46149 in your case the boot disk space is not the problem.

You do have the separate root and /var partitions, which were removed from new installations from Core Update 141 onwards. At that time the boot partition was also enlarged to 128MB, replacing the 64MB used in Core Update 140 and earlier.

You have a 306MB boot partition, so you must have adjusted your partitions manually at some point. 306MB is large enough not to cause a problem.

Your root partition (sda3) only has 205MB free on it. While that was enough for the recent update to CU181, it will eventually not be enough and the update will fail.

If you don't want to re-install, then you should consider manually moving some space from the /var partition to the / partition.
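To catch a too-full root partition before it makes an update fail, a small pre-update check can help. A minimal sketch; the 500MB threshold is my own assumption, not an official IPFire requirement:

```shell
#!/bin/sh
# Rough pre-update check: warn if the root filesystem is low on free
# space before running a Core Update.
# NOTE: the 500MB threshold is illustrative, not an official value.
THRESHOLD_MB=500
free_mb=$(df -Pm / | awk 'NR==2 {print $4}')
if [ "$free_mb" -lt "$THRESHOLD_MB" ]; then
    echo "WARNING: only ${free_mb}MB free on / - the update may fail"
else
    echo "OK: ${free_mb}MB free on /"
fi
```

The `-P` flag keeps `df` output on one line per filesystem so the `awk` column index stays stable.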

In conclusion, the problem you are experiencing is not related to disk space.

@michaelkocum you have the partition layout from after Core Update 140 (around 2020), so you must have re-installed after that time to have that partition structure. It cannot be changed via the Core Updates, as the partitions have to be unmounted to modify them, which can't be done during a Core Update.

Your boot partition is 128MB. Currently, for ext4-based systems, 256MB is created on a fresh install, as some installations end up requiring more than 128MB, but in most cases 128MB is still enough.

If you had an installation from 2017 or earlier, then the boot partition would have only been 64MB, which is definitely too small. However, you have 128MB with 50% free on your boot partition, so I don't believe that your boot partition is too small and causing the problem you are experiencing.

Note that booting fails only with the VirtIO SCSI driver. Booting with the IDE driver works fine.

So disk space may not be the problem. Maybe the driver is missing something that was removed in core 181.

I did a fresh install of core 141 and upgraded up to 181.
Unfortunately, the error did not occur.

The problem must be in udev.

I found this in the logs during the update:

Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[11607]: RUN{builtin}: 'firmware' unknown /lib/udev/rules.d/50-firmware.rules:3
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[11607]: specified group 'input' unknown
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[11607]: specified group 'kvm' unknown
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[11607]: RUN{builtin}: 'uaccess' unknown /lib/udev/rules.d/73-seat-late.rules:16
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[11607]: IMPORT{builtin}: 'net_setup_link' unknown /lib/udev/rules.d/80-net-setup-link.rules:9
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[12967]: expect: kmod load
Dec 12 22:21:24 dus-fw2-x64 last message repeated 2 times
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[12968]: expect: kmod load
Dec 12 22:21:24 dus-fw2-x64 kernel: <27>udevd[12968]: expect: kmod load

That udev cannot load kernel modules for the not-yet-booted kernel is normal. None of these messages should result in a non-working initial ramdisk.

I have reproduced the error with core120 (old partition layout), but the partitioning is not the problem. (I removed the /var partition and resized boot and root before the upgrades.) There must be an old or wrong file somewhere, but I have not found it yet.


After many installation attempts with old versions and much comparing, I have found the culprit…
There is an old udevd binary that dracut finds first and puts into the initrd.
The error is fixed in master (core182).
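One way to see which files dracut packed into the generated initrd is `lsinitrd`, which ships with dracut. The image path below is an assumption and will differ per system:

```shell
# List the initrd contents and look for any udevd binary that dracut
# pulled in; a stale /lib/udev/udevd showing up here points at the bug.
lsinitrd /boot/initramfs-$(uname -r).img | grep -i udevd
```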

You can also update to core181 if you delete the file with
rm /lib/udev/udevd
before updating to core181.
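A slightly more defensive form of the same workaround; the check only avoids an error when the file is already absent, and the path is the one named above:

```shell
#!/bin/sh
# Remove the stale udevd binary before upgrading to core181, so that
# dracut does not find it first and pack it into the new initrd.
STALE=/lib/udev/udevd
if [ -e "$STALE" ]; then
    rm -v "$STALE"
else
    echo "no stale udevd at $STALE - nothing to remove"
fi
```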


I can confirm that upgrading from 179 to 182 works.

I can confirm that running rm /lib/udev/udevd before updating to 181 works.
Thanks for fixing!