LVM not properly working after update to IPFire 2.27 (x86_64) - core175

I’ve got an IPFire system at home, on which I use LVM on an additional disk that holds my Samba shares.
Today I ran the update to »core175«. After the reboot, the system wouldn’t come up again.


The »Wait for devices used in fstab ...« step would take quite long; finally it would fail as shown in the picture.

I commented out the LVM volumes in fstab, and the system started again.
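As an alternative to commenting the entries out entirely, a hypothetical fstab line with the `nofail` mount option would let the boot continue when the device is absent (whether IPFire’s »Wait for devices used in fstab« step honours that option is something to verify):

```
# /etc/fstab -- hypothetical entry; nofail keeps a missing device
# from blocking the boot.
/dev/vg0/lv1  /mnt/lv1  ext4  defaults,nofail  0  2
```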

blkid would recognize my disk as LVM2_member, but omit the LVs.
vgdisplay would show the expected output, though.
It appears that the LVs are »NOT available« (lvdisplay) or »inactive« (lvscan):

[root@ipfire ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/lv1
  LV Name                lv1
  VG Name                vg0
  LV UUID                INZqC7-JaQq-lCww-Ec2G-nLWX-kYvZ-hYxth2
  LV Write Access        read/write
  LV Creation host, time ipfire.localdomain, 2023-06-29 14:10:26 +0200
  LV Status              NOT available
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/vg0/lv2
  LV Name                lv2
  VG Name                vg0
  LV UUID                YNaISQ-oNRf-cZsd-APaM-IVRe-W8i0-7p9V1z
  LV Write Access        read/write
  LV Creation host, time ipfire.localdomain, 2023-06-29 14:13:30 +0200
  LV Status              NOT available
  LV Size                1020.00 MiB
  Current LE             255
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

[root@ipfire ~]# lvscan
  inactive          '/dev/vg0/lv1' [1.00 GiB] inherit
  inactive          '/dev/vg0/lv2' [1020.00 MiB] inherit
[root@ipfire ~]#

vgchange --activate y vg0 would activate the volumes, so that they are usable again.
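For reference, a small sketch of that manual step which derives the needed activation command from the lvscan output instead of hard-coding the VG name. The sample text below is the lvscan output from this thread; on a live system you would capture it with `lvscan_output=$(lvscan)` and run the printed command as root (`vgchange -ay vg0` is the equivalent short form):

```shell
# Sketch only: derive the activation command(s) from lvscan output.
# The sample below is the output shown earlier in this thread.
lvscan_output="  inactive          '/dev/vg0/lv1' [1.00 GiB] inherit
  inactive          '/dev/vg0/lv2' [1020.00 MiB] inherit"

# Pull the VG name out of each inactive LV path ('/dev/vg0/lv1' -> vg0),
# de-duplicate, and print the vgchange call that would activate it.
echo "$lvscan_output" \
  | awk '/inactive/ { split($2, p, "/"); print p[3] }' \
  | sort -u \
  | sed 's/^/vgchange --activate y /' \
  | tee /tmp/activate_cmds.txt
```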

Is that behaviour expected?
What can be done to activate the LVs at boot time again?


By the way, the issue is reproducible.
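Until a fixed package is installed, one possible stop-gap (a sketch only, not tested here: it assumes IPFire still executes /etc/sysconfig/rc.local at the end of boot, and the mount points are hypothetical) would be to activate and mount the volumes from there instead of fstab:

```sh
#!/bin/sh
# /etc/sysconfig/rc.local (hypothetical) -- runs at the end of boot.
# Activate the VG first, then mount the LVs that were removed from fstab.
vgchange --activate y vg0
mount /dev/vg0/lv1 /mnt/lv1   # hypothetical mount points
mount /dev/vg0/lv2 /mnt/lv2
```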

Before attempting any command, back up your data on the LVMs. I’m not an expert on LVM, but what do the logs say? The devices are functioning after boot, they just don’t start with it!


To check the logs related to LVM, you can use the journalctl command. Here’s how:

  1. View the system logs related to LVM:

#journalctl -u lvm2-lvmetad.service
#journalctl -u lvm2-monitor.service

This will display the logs specifically for the LVM service and monitor. Look for any error messages or warnings that might indicate the cause of LVM not starting at boot.

  2. If you don’t find any specific logs related to the LVM services, check the general system logs for any relevant messages:

#journalctl -xe

This command displays the system logs, including any errors or warnings from various services. Look for any entries that might be related to LVM or the logical volumes.

  3. Additionally, check the /var/log/messages file for any LVM-related errors or warnings:

#cat /var/log/messages | grep lvm

This command filters the log file for any lines containing “lvm” and displays them. Look for any relevant error messages or warnings.
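The step-3 filter can also be run as a self-contained demo. The log lines below are hypothetical stand-ins for real /var/log/messages entries; note that grep can read the file directly, so no `cat` pipeline is needed:

```shell
# Hypothetical sample log standing in for /var/log/messages.
log=/tmp/messages.sample
cat > "$log" <<'EOF'
Jun 29 14:10:26 ipfire kernel: device-mapper: ioctl: version query
Jun 29 14:10:27 ipfire lvm[210]: 2 logical volume(s) in volume group "vg0" now active
Jun 29 14:10:28 ipfire sshd[512]: Server listening on 0.0.0.0 port 222
EOF

# -i: case-insensitive; -E: extended regexp covering both subsystems.
grep -i -E 'lvm|device-mapper' "$log"
```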

By examining the logs, you should be able to identify any specific errors or issues that are preventing LVM from starting at boot. If you encounter any error messages that you’re unsure about, feel free to provide them here.
This was help from ChatGPT; sometimes it gets command instructions wrong, and I didn’t try it on my system, I’m just pointing out a helpful way.
Backup, backup, backup before!

It is a bug due to a change in how LVM uses udev to automatically activate existing LVs. The LVM udev rule now only works with systemd, which IPFire does not have.

A fix has been applied to the IPFire git repo and is now available in CU176 testing, which was released today.



thank you, it works! :smiley:

Didn’t patch the whole system yet, but applied Mr. Weismüller’s fix for now.
It works!
Thanks a lot for all your efforts!
