Adding a single host is a pain

I’ve a fairly long list of host entries in IPFire and the list is still growing.

However, adding a single host to the list and pressing the save button has really become a PITA!
The webpage saves and saves and saves…

I had a quick look into the CGI file, and it seems that after pressing the save button, the code behind it rebuilds the hosts file and restarts a service (don’t recall which one ATM, unbound?) once for each entry in the list, again and again.

This results in a timeout in my case because of the number of hosts I’ve added to IPFire.

Can this behaviour be changed?
E.g. keep the current save or update button for modifying or adding an entry, but add a new extra button to finally commit all changes.

This would speed up the editing enormously!

If that commit button restarted the service in question only once, after the final hosts file has been rebuilt, that would be another speed gain.
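
Roughly, in shell terms, the difference would be something like this (just a sketch, I haven’t traced the CGI in detail, and I’m only using the rebuildhosts/unboundctrl helper scripts from /usr/local/bin plus the hosts file in /var/ipfire/main for illustration):

# what the save button appears to do today - one rebuild and one service restart per entry:
while read -r _; do
    /usr/local/bin/rebuildhosts
    /usr/local/bin/unboundctrl restart
done < /var/ipfire/main/hosts

# what a separate commit button could do instead - a single rebuild and a single restart in total:
/usr/local/bin/rebuildhosts
/usr/local/bin/unboundctrl restart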

3 Likes

When I did it in batch, I added all the hosts to /var/ipfire/main/hosts. Then if you go into the WUI you will see the hosts there but they will not be active. To activate all of them, edit one line and save it. All lines will become active in one hit.
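
Roughly, the batch step looks like this on the console (my-new-hosts.txt is just a placeholder for wherever your prepared entries live, and the entries themselves have to follow the same field layout as the existing lines in that file):

cp -p /var/ipfire/main/hosts /var/ipfire/main/hosts.bak   # backup before touching the file
tail -n 3 /var/ipfire/main/hosts                          # check the field layout of the existing entries
cat my-new-hosts.txt >> /var/ipfire/main/hosts            # append the prepared entries (placeholder file name)
# then edit and save any single entry in the WUI to activate the whole list in one hit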

1 Like

That is currently my workaround, too, at least the part about editing the corresponding hosts file in /var/ipfire/main directly.

However, I’m executing the following commands in a terminal after editing the hosts file.

/usr/local/bin/rebuildhosts
/usr/local/bin/unboundctrl restart

But still, the restart of unbound takes ages on the CLI, too, for unknown reasons. Maybe that’s why the WebIF also takes that much time?

Does restarting unbound really take that much time? It’s 6 minutes here!
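
For reference, timing the two steps separately (e.g. with time in a root shell) should confirm that it is really the unbound restart and not the hosts rebuild that eats the minutes:

time /usr/local/bin/rebuildhosts           # rebuild the hosts file
time /usr/local/bin/unboundctrl restart    # restart unbound - the step that takes ~6 minutes here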

I just tried it on my vm and main systems.

vm system - 11 hosts in host list - 5 secs
main system - 32 hosts in host list - 10 secs

So nothing like 6 minutes.

How many hosts do you have in your hosts list?

1 Like

After answering Adolf’s question (I am curious also!)…

If you are using RPZ, you may have enabled too many RPZ lists.

See Notes here: https://www.ipfire.org/docs/addons/rpz#notes

1 Like

Many, currently about 50 to 60 entries!

Sorry, no RPZ so far :upside_down_face:

With that number, based on my figures, I would expect around 15 to 20 secs max. It took 10 secs with 32 hosts, and roughly tripling the number of hosts from 11 to 32 only doubled the time.

So no idea what is causing your problem.

The only thing I can think of is to put unbound into debug mode with increased verbosity to see what is occurring during its startup.

You will need to edit the unbound initscript, so make sure to create a backup copy first.

Edit /etc/rc.d/init.d/unbound to change line number 577 from

loadproc /usr/sbin/unbound || exit $?
to
loadproc /usr/sbin/unbound -dd -vv || exit $?

Then run unboundctrl restart

The -dd puts unbound into debug mode, and with two d’s it sends all the messages to the console (or ssh) terminal screen and nothing to syslog. It also leaves unbound running in the foreground.
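
For reference, the whole sequence could look like this on the console (the .orig suffix for the backup is just an example):

cp -p /etc/rc.d/init.d/unbound /etc/rc.d/init.d/unbound.orig   # keep a backup of the initscript
grep -n 'loadproc /usr/sbin/unbound' /etc/rc.d/init.d/unbound  # confirm which line to change
# edit that line with your preferred editor so it reads:
#   loadproc /usr/sbin/unbound -dd -vv || exit $?
/usr/local/bin/unboundctrl restart                             # unbound now runs in the foreground with debug output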

On my vm testbed, which completed the restart in 5 secs, I got the following messages:

Starting Unbound DNS Proxy...
[1748246886] unbound[5123:0] notice: Start of unbound 1.23.0.
May 26 10:08:06 unbound[5123:0] debug: module config: "validator iterator"
May 26 10:08:06 unbound[5123:0] debug: chdir to /etc/unbound
May 26 10:08:06 unbound[5123:0] debug: drop user privileges, run as nobody
May 26 10:08:06 unbound[5123:0] debug: switching log to stderr
May 26 10:08:06 unbound[5123:0] debug: Forward zone server list:
May 26 10:08:06 unbound[5123:0] info: DelegationPoint<.>: 0 names (0 missing), 5 addrs (0 result, 5 avail) parentNS
May 26 10:08:06 unbound[5123:0] debug: Reading root hints from /etc/unbound/root.hints
May 26 10:08:06 unbound[5123:0] info: DelegationPoint<.>: 13 names (0 missing), 26 addrs (0 result, 26 avail) parentNS
May 26 10:08:06 unbound[5123:0] notice: init module 0: validator
May 26 10:08:06 unbound[5123:0] notice: init module 1: iterator
May 26 10:08:06 unbound[5123:0] debug: target fetch policy for level 0 is 3
May 26 10:08:06 unbound[5123:0] debug: target fetch policy for level 1 is 2
May 26 10:08:06 unbound[5123:0] debug: target fetch policy for level 2 is 1
May 26 10:08:06 unbound[5123:0] debug: target fetch policy for level 3 is 0
May 26 10:08:06 unbound[5123:0] debug: target fetch policy for level 4 is 0
May 26 10:08:06 unbound[5123:0] debug: cache memory msg=66104 rrset=66104 infra=7952 val=66384
May 26 10:08:06 unbound[5123:0] info: start of service (unbound 1.23.0).

At this point it has started the unbound service.

Then press ctrl-c to stop unbound and you will get the following shutdown messages.

May 26 10:09:30 unbound[5123:0] info: service stopped (unbound 1.23.0).
May 26 10:09:30 unbound[5123:0] info: server stats for thread 0: 0 queries, 0 answers from cache, 0 recursions, 0 prefetch, 0 rejected by ip ratelimiting
May 26 10:09:30 unbound[5123:0] info: server stats for thread 0: requestlist max 0 avg 0 exceeded 0 jostled 0
May 26 10:09:30 unbound[5123:0] info: mesh has 0 recursion states (0 with reply, 0 detached), 0 waiting replies, 0 recursion replies sent, 0 replies dropped, 0 states jostled out
May 26 10:09:30 unbound[5123:0] debug: cache memory msg=66104 rrset=66104 infra=7952 val=66384
May 26 10:09:30 unbound[5123:0] debug: switching log to stderr                        [  OK  ]

Then put back the original unbound initscript and run unboundctrl restart so that unbound is started again in background mode.
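
For example, assuming the backup from the sketch above was saved as /etc/rc.d/init.d/unbound.orig:

cp -p /etc/rc.d/init.d/unbound.orig /etc/rc.d/init.d/unbound   # restore the original initscript
/usr/local/bin/unboundctrl restart                             # unbound starts in background mode again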

Maybe in your debug messages there will be some clue as to what it is doing for those 6 minutes.

2 Likes

@bonnietwin, will give this a try later on today. Thanks!