Filesystem runs full

Hi,

on the start page of the admin interface it shows me that my sda3 is filling up.

[root@ipfire ~]# du / -d1 -h | sort -r -h
du: cannot access '/proc/14285/task/14350/fdinfo/135': No such file or directory
du: cannot access '/proc/17300/task/17300/fd/4': No such file or directory
du: cannot access '/proc/17300/task/17300/fdinfo/4': No such file or directory
du: cannot access '/proc/17300/fd/3': No such file or directory
du: cannot access '/proc/17300/fdinfo/3': No such file or directory
4.8G    /
3.1G    /var
856M    /lib
793M    /usr
36M     /boot
20M     /etc
15M     /root
11M     /sbin
6.0M    /bin
4.7M    /srv
3.1M    /.gnupg
2.5M    /opt
520K    /run
160K    /home
16K     /media
16K     /lost+found
16K     /dev
4.0K    /tmp
4.0K    /mnt
0       /sys
0       /proc
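As an aside, the "cannot access" messages are just /proc entries vanishing while du scans them; a variant that stays on the root filesystem avoids the noise entirely (this is a general du feature, not IPFire-specific):

```shell
# -x keeps du on the filesystem that / lives on, so pseudo-filesystems
# like /proc and /sys are skipped; 2>/dev/null hides any remaining
# transient-file warnings.
du -xh -d1 / 2>/dev/null | sort -rh
```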

My System: fireinfo.ipfire.org - Profile b1561cbb77f771fcbe1e3698fe7edd0ed212ac4a

I have already deleted all backup files (except the last one) and the old logfiles in /var/log.

Unfortunately without any improvement.

Do you have any tips on how I could clean up my disk space?

Greetings
Johnny

sda3 on my IPFire is set up as the swap partition, and it really should be empty.

Your root partition has 4.8G used, and IPFire normally makes this partition as big as it can. Your hard disk is shown on your fireinfo as 15GB, so I am not sure that is your problem.

What do you get if you run lsblk from the console? This will show which mountpoint sda3 is assigned to.

On your status/media WUI page you can also see the state of all the partitions except swap; the table shows the size of each one and how full it is.
If sda3 is not swap on your system, what does the status/media page show for sda3's mountpoint and its maximum size?
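If the WUI is not handy, the same numbers are available from the console; a quick sketch:

```shell
# df shows size and usage for every mounted partition (what the
# status/media page displays); /proc/swaps lists the swap devices
# that df leaves out.
df -h
cat /proc/swaps
```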

lsblk says:

[root@ipfire ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0  14.9G  0 disk
├─sda4   8:4    0  11.9G  0 part /var
├─sda2   8:2    0 982.9M  0 part [SWAP]
├─sda3   8:3    0     2G  0 part /
└─sda1   8:1    0    64M  0 part /boot

Do you mean this overview?

Okay, so you have a root partition and a var partition.

The separate root and var partitions were dropped in 2018 and merged into a single root partition holding the OS, the log files, etc.

Your sda3 partition is not being affected by the log files, as they are stored on sda4, which still has around 10G spare. The problem is your root partition, which contains the OS. It is only 2G, and as the OS has developed and more things have moved into the core, that partition size is becoming too small.

I think the easiest way to deal with this is to re-install IPFire after making a backup and saving it to a separate machine.
As well as giving you a single root partition that includes /var, this will also give you a larger boot partition. Yours is 59M and 66% full; these days the boot partition is made 110M.

Before doing the re-install I would recommend finding and noting down the mac and ip addresses for each network adaptor so you know which one to assign to red, green and orange when you re-install.
Maybe also read back through the installation section of the wiki beforehand to remind yourself of what you will meet when you do the re-install.
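One way to note the adaptors down, sketched from sysfs (the output file name is just an example; copy it off the box before re-installing):

```shell
# Print "interface MAC" for every NIC known to the kernel and save it
# to a file that should then be copied to another machine.
for nic in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
done > nics-before-reinstall.txt
cat nics-before-reinstall.txt
```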

While it may not be something you were expecting to need to do, I would say that a re-install is not that difficult.
Key thing is the preparation beforehand.
It is not something you have to do immediately, as your logs, backups, and any settings storage are all on the much larger /var partition. The problem will come with future Core Updates, as the OS gets larger and larger.


Ufffff… :frowning:
Is there no other way to clean the /?

The problem is that the root in your setup is just the OS. It does not contain any logs or backups or settings files. Those are all in the /var partition.

EDIT:
Your sda3 2G partition has 856M in /lib and 793M in /usr. Neither of those directories contain anything that you can delete.
Sorry to be the bearer of bad news.

It is not something you have to do right now, as the OS itself won't get any bigger between Core Updates, but the likelihood is that each Core Update will make the core OS a little larger, especially when new core capabilities are installed.


at least until the next Core Update…


I went through the same thing. Re-installing is easier than it sounds. (Just make sure your backup is downloaded to a separate computer!)


Hello,
I DO NOT RECOMMEND THIS - TRY IT ON YOUR OWN RISK
I have increased my / partition from APU2D4 by stopping swap and using the swap partition

  1. Turn off swap
swapoff -a
  1. edit fstab and mtab → comment the swap entries.
  2. Run
fdisk -l

and save the output to a file, copy that file on your laptop/desktop (!!!) → identify the partition number for / (my system used sda3)
4. build Tinycore flash disk and boot the box from it: TinyCore Linux USB installer (pcengines.ch)

  1. Boot APU from the flash usb with Tiny core and DiskDump the partition identified on point 3 to a file on partition 4 from same internal APU disk (that is faster in case you use in APU and SSD - like I do)

Ex:

umount /dev/sda3
cd /mnt/sda4 #sda4 is the /var partition from IPFIRE
dd if=/dev/sda3 bs=4096 conv=notrunc,noerror,sync of=sda3_mSATA_slash_APU2.img
  1. Use fdisk pointed to the internal disk from APU and erase the partition that coresponding to / on point 3 (sda3)

ex:

fdisk /dev/sda
  1. Show disk partition - there should be a big unused amount of sectors between partition 1 and partition 4. Write down the end of first partition and start of 4th partition
    Then erase partition used by / in Ipfire (my case was number 3)

  2. Create a new Primary partition with SAME NUMBER as the one erased above and use as start sector the end of first partition +1 sector. Fdisk will suggest as end for this partition the previous sector in front of partition 4 (check that with numbers from point above)
    Write the new partition table

  3. Dump back on the new partition the content saved on point 5.

Ex:

dd  if=sda3_mSATA_slash_APU2.img of=/dev/sda3 bs=4096 conv=notrunc,noerror,sync
  1. Mount the partition and check that is readable

Ex:

mound /dev/sda3 /mnt/sda3
ls -l /mnt/sda3 
  1. reboot to IPFIRE and then write /.partresize file
    This file will tell IPFIRE to extend the partition to its maximum available size
touch /.partresize
  1. Reboot and watch how IPFIRE does the resize.
    It accomplish that by using /etc/rc.d/init.d/partresize which the developers included in Start (boot) sequence /etc/rc.d/rcsysinit.d/S25partresize
    That file will initiate another reboot - be patient - and only at second reboot the resize takes place
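The "comment out the swap entries in fstab" edit above can be scripted with sed; a minimal sketch, demonstrated on a scratch copy rather than the real file (on the box, point FSTAB at /etc/fstab instead):

```shell
# Demonstrated on a scratch copy; the .bak copy is the safety net.
FSTAB=$(mktemp)
printf '/dev/sda2  none  swap  sw  0 0\n/dev/sda3  /  ext4  defaults  1 1\n' > "$FSTAB"
cp "$FSTAB" "$FSTAB.bak"
# comment out every uncommented line whose fields include "swap"
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)|#\1|' "$FSTAB"
cat "$FSTAB"
```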

here is the output during second reboot:

Mounting remaining file systems...                                     [  OK  ]
Activating all swap files/partitions...                                [  OK  ]
Re-sizing root partition...
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/sda3 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/sda3 is now 775913 (4k) blocks long. 

Done

My IPFire sda3 partition prior to this was 93% used and I had warnings in the GUI:

Disk Usage

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G  4.0K  1.9G   1% /dev
tmpfs           2.0G   12K  2.0G   1% /dev/shm
tmpfs           2.0G  624K  2.0G   1% /run
/dev/sda3       2.0G  1.7G  144M  93% / -> I was getting warnings in WUI for only 7% free space left
/dev/sda1        59M   35M   21M  63% /boot
/dev/sda4        12G  1.5G  9.6G  14% /var
/var/lock       8.0M   16K  8.0M   1% /var/lock
none            2.0G  504K  2.0G   1% /var/log/vnstat
none            2.0G   48M  1.9G   3% /var/log/rrd 

Fdisk output (saved at step 3): notice the gap between sda1 and sda3 → I had erased sda2, which was swap

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1  *       2048   133119   131072   64M 83 Linux
/dev/sda3       2146120  6340423  4194304    2G 83 Linux
/dev/sda4       6340424 31275183 24934760 11.9G 83 Linux

After above process

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G  4.0K  1.9G   1% /dev
tmpfs           2.0G   12K  2.0G   1% /dev/shm
tmpfs           2.0G  616K  2.0G   1% /run
/dev/sda3       2.9G  1.7G  1.1G  62% /  -> This is much better!
/dev/sda1        59M   35M   21M  63% /boot
/dev/sda4        12G  3.6G  7.5G  33% /var
/var/lock       8.0M   16K  8.0M   1% /var/lock
none            2.0G  504K  2.0G   1% /var/log/vnstat
none            2.0G   48M  1.9G   3% /var/log/rrd

New partition table: notice that sda3 now starts immediately after sda1 and extends right up to sda4 - 3GB now

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1  *       2048   133119   131072   64M 83 Linux
/dev/sda3        133120  6340423  6207304    3G 83 Linux
/dev/sda4       6340424 31275183 24934760 11.9G 83 Linux

Looking at the steps that have to be done and checked, I would find it quicker to make a backup, including logs, then do a fresh install followed by a restore of the backup.


Good recipe for regaining space with the outdated partition scheme.
New installations do not have a separate /var partition, see:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        971M  4.0K  971M   1% /dev
tmpfs           983M   12K  983M   1% /dev/shm
tmpfs           983M  572K  983M   1% /run
/dev/sda4        14G  6.1G  7.2G  46% /
/dev/sda1       110M   39M   63M  39% /boot
/dev/sda2        32M  270K   32M   1% /boot/efi
/var/lock       8.0M   12K  8.0M   1% /var/lock
none            983M   42M  942M   5% /var/log/rrd
none            983M  436K  983M   1% /var/log/vnstat

Maybe a reinstall with a current Core Update gives a better result.

My machines have many custom elements, including how the WUI is launched (I use my own certificates from my own CA, and at startup a check verifies the cert is valid or generates a new one), OpenVPN also uses my own CA, Suricata uses my own script that combines ET with Talos sources, DNS includes Pi-hole scripts… etc.

It is much easier, in terms of both time and effort, to reuse swap…

So you do not really have an effective backup of your system config?
How will you proceed in case of a system loss?


ALTERNATIVE:

  • Boot from stand-alone (e.g.) “SystemRescueCD”
  • shrink sda4=/var from the low end
  • extend sda3=/ on the upper end

AFTERWARDS,
take heed of Bernhard’s warning above and

  • create yourself a system image ! , e.g. via CloneZilla.
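For that system image, dd piped through gzip works as well as Clonezilla; a minimal sketch, using a scratch file in place of the real device (on the real box, swap in something like /dev/sda as the source and an external drive as the target path):

```shell
# Take a compressed image of a "disk" and verify it round-trips.
DISK=$(mktemp)                        # stand-in for a real /dev/sda
head -c 1048576 /dev/urandom > "$DISK"
dd if="$DISK" bs=4M 2>/dev/null | gzip > /tmp/ipfire-disk.img.gz
# restore would be: gunzip -c /tmp/ipfire-disk.img.gz | dd of="$DISK" bs=4M
gunzip -c /tmp/ipfire-disk.img.gz | cmp - "$DISK" && echo "image verified"
```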

I usually plug in the backup/testing APU1 router (it is a bit old, only 2 cores, and needs 10 minutes to load everything), or use a spare SSD, restore the disk dump from another similar system, and keep using the APU2 hardware.

Cloning SSD disks with dd is very fast (2GB took less than a minute), and the clone contains all the custom scripts, the latest IPS signatures, the latest DNS blocklist (which takes 16 minutes to generate because it combines so many sources that need to be deduplicated and sorted), the latest CRL (otherwise the VPN does not start), the latest location database (sorting it takes time)… so many elements that are needed for protection.

After adding up all the time needed to update all the daemons with the latest information, I ended up increasing partitions instead of doing install + restore + re-adding my own scripts and running them to regenerate the data the system needs to filter traffic.

And I already have so many APU machines lying around, some already running IPFire, so I am never forced to set up a machine from scratch - I always have 1-2 spare machines at exactly the same version as the primary.

So I have quite a few sources from which I can clone a disk… and those all use 4 partitions (the old style)…

Cloning is the solution for me, and it also gives me the full history of that system (graphs, rotated logs - by the way, some IPS logs are never rotated, so I also edited the logrotate config for the IPS, etc.)…

After 12 years of using IPFire, my systems are anything but "stock", and cloning them gives the best result in terms of time and quality when restoring a dead disk.

IPFire sda3 (/) high % of use.
I updated to Core Update 163 last week and found sda3 (/) 92% full.
After investigation it became obvious that this system (I have 4 running concurrently) was originally built some 5 years ago. The other 3 systems were only at 30%-40% on sda3 (/).
A clean rebuild of IPFire with 163 solved the problem, and importing a backup reinstated all the important bits. All running again within 30 minutes.
GaryNZ

