RAID0 array is not visible

Hello!

I have an ASRock B760 Pro motherboard,
2 SATA HDDs, and
1 NVMe SSD.

The system is installed on the NVMe SSD.

In the BIOS, I configured the two HDDs as a RAID0 array.

IPFire does not see the RAID0 array; it only sees the 2 separate HDDs,
sda and sdb…

[root@wrouter ~]# cat /proc/partitions
major minor #blocks name

    1 0 16384 ram0
    1 1 16384 ram1
    1 2 16384 ram2
    1 3 16384 ram3
    1 4 16384 ram4
    1 5 16384 ram5
    1 6 16384 ram6
    1 7 16384 ram7
    1 8 16384 ram8
    1 9 16384 ram9
    1 10 16384 ram10
    1 11 16384 ram11
    1 12 16384 ram12
    1 13 16384 ram13
    1 14 16384 ram14
    1 15 16384 ram15
    8 16 21485322240 sdb
    8 0 21485322240 sda
  259 0 976762584 nvme0n1
  259 1 524288 nvme0n1p1
  259 2 32768 nvme0n1p2
  259 3 1048576 nvme0n1p3
  259 4 975154904 nvme0n1p4

[root@wrouter ~]# cat /proc/mdstat
Personalities :
unused devices: <none>

With my previous motherboard, a KONTRON KTQM87, there was no such problem; it immediately saw the RAID0 volume.

What else should be set?

There has been a very large rewrite of the code for the ExtraHD page.

A bug was raised on the ExtraHD change and various fixes were implemented in CU179 and CU180.

Not all of the highlighted bugs were resolved in those two Core Updates, so maybe there is still a bug in the situation you are describing.

Please add your input to the bug report.

https://bugzilla.ipfire.org/show_bug.cgi?id=12863

I installed Core Update 178 as a test, but the system still doesn’t see the RAID array. Not only does ExtraHD not see it, the system itself also only sees 2 separate HDDs.

If I run fdisk /dev/sda or fdisk /dev/sdb on one of the drives, it says this:

The device contains ‘isw_raid_member’ signature and it will be removed by a write command. See fdisk(8) man page and --wipe option for more details.
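
For reference, such signatures can also be inspected without writing anything to the disks; wipefs with no options and blkid are both read-only (both come from util-linux, which I am assuming is present on IPFire):

    # list any filesystem/RAID signatures found on the member disks (read-only)
    wipefs /dev/sda
    wipefs /dev/sdb

    # blkid should similarly report TYPE="isw_raid_member" on both disks
    blkid /dev/sda /dev/sdb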

As you are creating the RAID array via the motherboard BIOS, it is likely that the specifics of the RAID format on the new motherboard are different from those on the previous motherboard.

Hardware RAID via the BIOS is different for each motherboard, and you can’t take a RAID array from one motherboard and have it easily accessed by another.
EDIT: I just realised that BIOS-based RAID is also software-based rather than hardware-based, but the software is specific to each motherboard manufacturer, hence the problems trying to get BIOS RAID drives recognised and accessed by another manufacturer’s BIOS RAID software.

Maybe that is where the issue is with IPFire.
EDIT: The question might be whether the Linux system has drivers for the specific BIOS RAID software on the new motherboard.
If IPFire saw the RAID array on your old motherboard, then maybe that manufacturer’s driver was added into the kernel.

All my RAID array testing with ExtraHD has been with software RAID arrays based on mdadm, and those were properly seen by the new ExtraHD CGI code.

To add a comment to the IPFire Bugzilla you have to log in, but your IPFire People email and password credentials will work for that login.


That message is saying that an Intel Matrix Storage RAID signature has been found, which is the BIOS RAID software approach used on your motherboard.

Reading about ISW indicates that dmraid is the Linux software package that would be able to recognise an ISW-structured RAID system. The dmraid package is not available on IPFire.

mdadm is the Linux RAID software on IPFire. From searching on the internet, apparently some versions of mdadm have been able to read ISW-based RAID systems, but not all of them, and seemingly not the newer versions of mdadm that IPFire has been running for some time.

So the above gives an idea of why your ISW-based BIOS RAID is not recognised by IPFire.
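
If you want to check what the mdadm build on IPFire actually makes of that metadata, a read-only test along these lines should be safe (the output will depend on the mdadm version and whether it understands the IMSM container format):

    # show any RAID superblock/metadata mdadm can read from the member disks
    mdadm --examine /dev/sda
    mdadm --examine /dev/sdb

    # list arrays mdadm thinks it could assemble, without starting anything
    mdadm --examine --scan --verbose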

What I don’t understand is why the BIOS raid on your previous motherboard was recognised by IPFire. What Core Update of IPFire were you running on that previous KONTRON motherboard?


It was IPFire 2.25 – Core Update 156; the Kontron KTQM87 Mini-ITX board definitely worked.

If I understand correctly, I should put the HDDs back into AHCI mode and create a RAID0 array with mdadm?

Okay, that probably explains why.

Core Update 156 is 2.5 years old, with a lot of Core Updates in between. You really should look at updating more frequently, especially for a security device like a firewall. Quite a few CVE vulnerabilities have been identified in that time period covering packages like the kernel, openssl, openssh, curl, etc., none of which you will have had protection against.

That time period also includes an update of mdadm from version 4.1 to 4.2, and that update includes around 2 years’ worth of development and bugfixes, among them enhancements and bugfixes for IMSM RAID, which is the Intel Matrix Storage Manager.
It is likely that something in that update no longer works with the version of IMSM RAID coming from your new motherboard. It is also likely that if you installed Core Update 180 on your KONTRON motherboard you would have the same problem.

I would suggest that if you can start afresh with the two RAID drives (i.e. no critical data is stored on them), then your best bet is to configure your RAID array on the two drives using mdadm on the IPFire machine. I am presuming that the two drives are not intended for the IPFire operating system but are being used as Extra Hard Drives.
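
If you do start afresh, the rough sequence would be to switch the SATA controller back to AHCI in the BIOS and then clear the old ISW metadata from both disks before partitioning them for mdadm. A minimal sketch, assuming the disks still appear as /dev/sda and /dev/sdb and hold nothing you want to keep (this is destructive):

    # remove all existing signatures, including the old isw_raid_member metadata
    wipefs -a /dev/sda
    wipefs -a /dev/sdb

    # the disks can then be partitioned and the array created with mdadm,
    # as shown later in this thread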

https://fireinfo.ipfire.org/profile/957ad3959914e72061592c579918d9fb359171ba

Maybe it’s a lack of a driver…

The raid runs perfectly on the Kontron motherboard with the latest update. I just tested it quickly.

I have checked through the kernel configuration in IPFire and there is the line

CONFIG_INTEL_RST=m

So the Intel RST driver module is installed but it is not loaded. That is normal as most users won’t require it.

I believe that normally, if it is needed, it should be loaded automatically, but maybe your new motherboard is not correctly flagging that it needs the Intel RST driver.

You could try lsmod | grep rst

which will show if any rst module is loaded. It will likely come back with no output, showing that no rst module is loaded.

Then run modprobe intel_rst followed again by lsmod | grep rst

and you should see something like

lsmod | grep rst
intel_rst              16384  0

which I did on my system and which shows that the intel_rst module is now loaded.

You can then check if you can see the raid drive now.

If yes, then you will need to add the modprobe command to rc.local so that the module is loaded when you start up your IPFire, as the motherboard is obviously not triggering the loading of the module itself.
https://wiki.ipfire.org/pkgs/rc-local
This will load the module each time you boot your IPFire. Once loaded it will stay loaded until IPFire is shutdown or rebooted.
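
As a sketch, assuming rc.local lives at /etc/sysconfig/rc.local as described on that wiki page, the addition would look something like this:

    # load the Intel RST module on every boot
    echo "modprobe intel_rst" >> /etc/sysconfig/rc.local

    # make sure rc.local is executable (it only runs at boot if it is)
    chmod 755 /etc/sysconfig/rc.local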


I tried it, but it didn’t help at all.

I created the RAID array with mdadm instead.

What might be a “bug” is that I cannot mount md127 from the ExtraHD menu. It says “You cannot mount md127”. If I enter it manually in fstab, everything is fine; it mounts immediately at startup.
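
For reference, a typical fstab entry for such an array looks roughly like the line below; mounting by filesystem UUID is generally safer than using /dev/md127, because md device numbers can change between boots (the UUID shown is only a placeholder, use the value blkid reports):

    # find the filesystem UUID of the array
    blkid /dev/md127

    # example /etc/fstab line (replace the UUID with the value reported above)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/ADAT  ext4  defaults  0  0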

Then it is likely that the ASRock motherboard needs a different driver that might not be in the kernel.

Yes, I would raise that in the bug report I linked in an earlier post.

I just created a RAID1 array from two drives on my VM testbed.

Each of the two drives had a single partition, created with a GPT partition table, and the partition type was set to Linux RAID.

The two partitions were /dev/sda1 and /dev/sdb1.

I then ran the following command to create a raid array

mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md/MyRAID1Array /dev/sda1 /dev/sdb1

I then formatted the raid array with ext4

mkfs.ext4 /dev/md/MyRAID1Array

On the ExtraHD page I could see the raid array drive and I was able to mount it on /mnt/data3

I then rebooted and the raid array was mounted after restarting.

So I can’t duplicate the problem you are experiencing.
That is why I have detailed the steps I took to create the raid array in case there is something different in the way you have created it.

[Screenshot of the ExtraHD page showing the mounted RAID array]

The reason my RAID array shows as md126 is that on my VM system the IPFire OS itself is installed on a RAID array, which is already using the md127 label.
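
As an aside, if the md device naming matters, the usual way to pin it is to record the array in mdadm’s configuration file so it is always assembled under the same name; a sketch, assuming the config file is at /etc/mdadm.conf (the location can vary between systems):

    # print the array definition in mdadm.conf format
    mdadm --detail --scan

    # append it so the array is assembled with a stable name at boot
    mdadm --detail --scan >> /etc/mdadm.conf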


Hi!

My raid:

parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary ext4 0% 100%
parted /dev/sda set 1 raid on
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary ext4 0% 100%
parted /dev/sdb set 1 raid on
mdadm --create --verbose /dev/md0 --raid-devices=2 --level=0 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md127

and now:

[root@wrouter ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G  4.0K   16G   1% /dev
tmpfs            16G   12K   16G   1% /dev/shm
tmpfs            16G  872K   16G   1% /run
/dev/nvme0n1p4  915G  2.4G  866G   1% /
/dev/nvme0n1p1  488M   76M  376M  17% /boot
/dev/nvme0n1p2   32M  270K   32M   1% /boot/efi
/var/lock       8.0M   16K  8.0M   1% /var/lock
/dev/md127       40T  166G   38T   1% /var/ADAT

[root@wrouter ~]# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sat Oct 14 19:53:21 2023
        Raid Level : raid0
        Array Size : 42970376192 (40.02 TiB 44.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Oct 14 19:53:21 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : wrouter.srv:0  (local to host wrouter.srv)
              UUID : 473141d6:060116f2:398aa4cf:291e7619
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

When I tried to mount md127 with ExtraHD, it said “You cannot mount md127”. But I mounted it successfully by entering it in fstab.

From this I am presuming that you are trying to mount the raid drive to /var/ADAT

In the updated ExtraHD CGI code, the only places drives can be mounted are within the directory trees of

/mnt
/media
/data

When you had the message about not being able to mount the drive, it should also have included the phrase

because it is outside the allowed mount path

but this is not appearing due to a typo in the CGI code.
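
In practical terms, moving the mount to one of the allowed locations should let ExtraHD manage it; roughly, and assuming nothing is currently using /var/ADAT:

    # unmount the array from the old location
    umount /var/ADAT

    # create a mount point under one of the allowed directory trees
    mkdir -p /mnt/ADAT

    # then add the mount on /mnt/ADAT from the ExtraHD page
    # (and remove the manual fstab entry so the two do not conflict)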

All right. I understand what’s wrong then.

Previously, I could mount it anywhere without any problems. I didn’t know that had changed either.

But ExtraHD only wrote “You cannot mount md127”…

I attach it to the /var/DATA/ location out of habit… :smiley:

Yes, it should have said that the mount path was not allowed, but the typo meant that message was never printed because the entry could not be matched in the language files.

I think it is limited for permissions reasons, so that it works consistently, and because in the past it has not been unknown for users to try mounting under /dev or /proc.

Please confirm that it works again for you if you use one of the alternative mount points.

A patch to fix that language-file typo has been submitted to the development mailing list.

Other than very expensive server RAID controllers, motherboard BIOS-based RAID is garbage. Today’s faster processors see no performance loss with software RAID under Linux, which is preferred and can be moved from machine to machine. Best to forget the BIOS RAID and do software RAID, although on IPFire I think it has to be set up via command-line options; I would have to check. The other option is to get a known add-on RAID controller that can be moved, and that can be replaced with the same model so the RAID is recoverable.
