RAID volume directory disappeared

Help, a very important directory has just gone away. What can I do?

A bit more information is needed.

Where is the raid volume supposed to be, on IPFire’s disk or on a separately mounted location?

In no particular order:-
What do you mean by disappeared?
What disks can be seen with the lsblk command?
What does the df -hl command show?
What type of raid setup is it? Raid 0, 1, 5 …
What does cat /proc/mdstat show about the raid status?
What actions did you take just before it disappeared?

A little more information is needed.

The RAID volumes are RAID 1 (two HDDs each); all are attached to the IPFire PC on a raid controller: an HP Smart Array P410i.
The HDD in question is /dev/sdb1, see below.

I have not taken any actions so far.
Disappeared simply means gone, as if by magic à la David Copperfield.

Below is the output of all three commands as code snippets.

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb      8:16   0 931.5G  0 disk 
└─sdb1   8:17   0 931.5G  0 part /mnt/2u6
sr0     11:0    1  1024M  0 rom  
sdc      8:32   0 931.5G  0 disk 
└─sdc1   8:33   0 931.5G  0 part /mnt/3u7
sda      8:0    0 279.4G  0 disk 
├─sda4   8:4    0 228.9G  0 part /var
├─sda2   8:2    0     1G  0 part [SWAP]
├─sda3   8:3    0  49.5G  0 part /
└─sda1   8:1    0    64M  0 part /boot

# df -hl
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G  4.0K   16G   1% /dev
shm             256M   12K  256M   1% /dev/shm
tmpfs            16G  1.2M   16G   1% /run
/dev/sda3        49G  2.8G   44G   6% /
/dev/sda1        59M   31M   24M  57% /boot
/dev/sda4       226G  172G   43G  81% /var
/dev/sdb1       932G  778G  154G  84% /mnt/2u6
/dev/sdc1       932G  597G  335G  65% /mnt/3u7
/var/lock       8.0M   12K  8.0M   1% /var/lock

 # cat /proc/mdstat 
Personalities : 
unused devices: <none>

You are using a separate raid controller, the HP Smart Array P410i. So mdstat is not relevant as that is for software raid.

It looks like the disks for your raid are still present (I suspect /mnt/2u6 and /mnt/3u7), but you might have a raid controller failure if the raid volume is no longer being shown as present…

There must be some diagnostic software that came with the raid controller but your best bet is to contact HP for help on how to troubleshoot your problem.
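
If you can get HP’s command-line tool for the Smart Array controllers (ssacli, formerly hpssacli/hpacucli) onto the box, or onto a rescue system booted on that hardware, it can report the controller and logical drive state. I can’t confirm it is installable on IPFire, so treat these commands as a sketch only:

# ssacli ctrl all show status
# ssacli ctrl all show config

The first prints the overall controller status; the second lists the arrays and logical drives, so you can see whether the one behind your volume is still reported as OK.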

I can’t help any further as all of my experience is with software raid (aka mdadm). I have no experience with hardware raid.

I have a similar problem, possibly the same problem!

I had an active MDADM softraid that worked until a reboot today.

After the reboot [done to apply a new NFS shares configuration], /dev/md127 no longer shows a UUID, fstab points to a single drive in the RAID1 array, and none of my files are available.

The Status tab for Media shows the md127 device as active [the raid device created by mdadm during the original setup], but the directory mapped to that drive is empty.

No mdadm addon is shown under the IPFire paks, nor as an option among the paks available to install.

No mdadm.conf exists on the system, according to an mc search from the root “/”.

I’m on Core 154 on an i3 with 8 GB RAM, a 120 GB SSD boot drive, and two 8 TB drives of mixed brands in the raid.

Hi @onnareal

Welcome to the IPFire Community

Was the reboot related to upgrading to Core 154 or had you been successfully running Core 154 with your raid array till this reboot?

mdadm is no longer an addon. It has been part of the core programs since 2014. Looking at the Wiki, this needs to be updated there as it still describes it as an addon.

mdadm.conf should be in /etc and would have been created there by you when you initially set up your raid configuration. If it is missing now then something must have deleted it.
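
For reference, the usual way to (re)create that file once an array is assembled and running is to let mdadm write it from the live array; this is standard mdadm usage rather than anything IPFire-specific:

# mdadm --detail --scan >> /etc/mdadm.conf

That appends an ARRAY line describing each running array, which mdadm can then use at assembly time.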

What does

cat /proc/mdstat

show? This will tell us the status of your raid array.

Personalities : 
md127 : inactive sdb1
      7813894488 blocks super 1.2

unused devices: <none>

That makes some sense, but the device /dev/sdb1 shows as ready with an active UUID.
/dev/sdc1 is shown as active on the directory where /dev/md127 should be.

Prior checks of ExtraHD would show a UUID for the raid device /dev/md127.

I set an NFS export from the same directory and lost access.

I searched for the mdadm.conf when I couldn’t find the mdadm add-on in the active IPFire paks, which I found odd after a reboot. It’s not listed as an available pak either, though rsync is still listed [and was installed when I set up the raid initially].

I was curious why the file went missing as well. I’ve been using the RAID as NAS storage for months since the initial configuration, and was changing the NAS from Samba to NFS for faster access times.

The inactive here means that the raid array is not running. Usually an inactive array means that there is a fault somewhere and you only have one drive showing in the list: the second drive has been stopped and removed from the array, most likely because it had a bad fault and was dropped out of the array completely.

The remaining drive would still then operate so the raid would still provide the data. However, there is no longer any redundancy so a fault on the remaining drive would then cause the Raid drive to no longer work.

To see if that actually is the case, some more data is required.

Run a smartctl health check on the two drives from the raid array. That is done on the sdb1 and sdc1 parts and can be carried out whether the drive is actively part of the raid array or not.
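
For example (a sketch; it assumes smartctl from smartmontools is available on the system, and I normally point it at the whole-disk devices):

# smartctl -H /dev/sdb
# smartctl -H /dev/sdc

The -H option prints the overall health verdict; smartctl -a /dev/sdX gives the full SMART report if you want more detail.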

Show the result from the lsblk command. That will show which drives are mounted where.

The next step(s) will depend on what results come back from the above commands. If the drives are healthy this will involve mounting each drive to a created mountpoint and listing the contents to see if they are still present or not.
If both drives show up healthy and have their contents still present then basically you will need to rebuild the array. If either of the drives shows up as dead then you will need to get a new drive and add it to the existing raid array. If both drives show up as faulty then you will need to start again.
If you haven’t got a backup then hopefully one of the drives is still working and can be mounted and a backup made to another drive.
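
As a rough sketch of the mount-and-inspect step for one member (the mountpoint name is made up, and because the array uses 1.2 superblocks the partition normally has to be assembled as a degraded one-disk array before it can be mounted):

# mdadm --stop /dev/md127
# mdadm --assemble --run /dev/md127 /dev/sdb1
# mkdir -p /mnt/raidcheck
# mount -o ro /dev/md127 /mnt/raidcheck
# ls /mnt/raidcheck
# umount /mnt/raidcheck
# mdadm --stop /dev/md127

The --stop at the start releases the currently inactive md127 so it can be reassembled, --run forces it to start with only one member, and the read-only mount avoids writing anything while you look. The same steps can then be repeated with /dev/sdc1.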

Please note my raid systems are on my server and desktop system, not on IPFire but the principles should apply.
The sources of information I use when I have to work on my raid systems are the following:-
https://raid.wiki.kernel.org/index.php/RAID_setup
https://wiki.archlinux.org/index.php/RAID

Feel free to come back with any questions as you get more data and I will give any help I am able to.

Thanks guys for the info.

But the raid volumes work without any problems, I would say: access via the network shares still works, and I can also see the directory structure and even browse it in the file manager. The only thing I criticise, or rather mourn, is that a directory (very important for me) has disappeared from among the many others that exist on the raid HDD /dev/sdb1. And just like that, all of a sudden.

The question is: how can I restore/find the lost directory? TestDisk or something else, but how on IPFire?

Hi @old_men

Sorry, I thought you meant that your whole raid volume had disappeared.

If some directories are no longer there then likely some command somewhere has deleted them. You could grep the logs for the directory name to see if you can find what command was run.
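
A sketch of that (replace the placeholder with the real directory name; on IPFire the main system log is /var/log/messages, and root’s shell history, if present, is also worth a look):

# grep -i "name-of-the-missing-directory" /var/log/messages*
# grep "name-of-the-missing-directory" /root/.bash_history

If the name turns up next to an rm, mv or similar command you will at least know what happened, even if it does not bring the data back.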

It sounds like there probably isn’t a backup of the raid volume.

As your raid is a hardware raid you can’t just move the drives to another PC that has the recovery software you want to use.

An option is to get a rescue CD with TestDisk or an equivalent on it and use that to recover the directories/files. It depends on how experienced you are with file recovery and how important the lost files are. I don’t know how easy the recovery programs are to use, or how likely they are to make things worse.
An alternative would be to pay a company to do the recovery for you.

Beyond the above, I can’t help further. I have no experience of doing file/directory recovery, other than restoring from my backups.

Thank you so much for all the suggestions!

The references alone are invaluable!

IPFire has no mdadm.conf by default. It is not needed in most cases because the internal defaults are OK. Raids with all disks present will be autostarted; others (degraded) will not, to protect you from data loss.

So first check the hardware for defective disks. If you find one, replace it and re-add it to the raid. If you do not find any fault, you can also try to re-add the rejected disk.
After this the raid should work again.
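
As a rough sketch of the re-add step, assuming the array is /dev/md127, it is assembled and running, and the dropped member turns out to be /dev/sdc1 (the device names here are assumptions, so check first with --examine):

# mdadm --examine /dev/sdc1
# mdadm --manage /dev/md127 --add /dev/sdc1
# cat /proc/mdstat

--examine shows whether the partition still carries a raid superblock, --add puts it back into the array, and /proc/mdstat then shows the rebuild progress.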

If you only want data recovery, you can also start the degraded raid manually, but in this mode you should mount it read-only.
