Could someone else who is using QoS please provide another data point?
Please run ip a | grep qdisc
in a shell on your IPFire box.
You should get output like this.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: red0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
3: orange0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
4: blue0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
5: green0p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master green0 state UP group default qlen 1000
6: green0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
7: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc fq_codel state DOWN group default qlen 32
8: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc fq_codel state DOWN group default qlen 32
10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master green0 state UNKNOWN group default qlen 1000
22: imq0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN group default qlen 32
(I have a virtual machine on GREEN)
We’re interested in what comes after qdisc, especially for the imq0 and red0 adapters.
This is what I see with green/blue/orange/red zones all active and QoS enabled.
[root@ipfire ~]# ip a | grep qdisc
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: green0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
3: blue0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
4: orange0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
5: red0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
6: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc fq_codel state DOWN group default qlen 32
7: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc fq_codel state DOWN group default qlen 32
8: imq0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN group default qlen 32
9: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN group default qlen 500
[root@ipfire ~]#
[root@NF-WKIT-01 ~]# ip a | grep qdisc
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: red0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
3: green0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
4: blue0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
6: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc fq_codel state DOWN group default qlen 32
7: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc fq_codel state DOWN group default qlen 32
9: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UNKNOWN group default qlen 1000
10: imq0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN group default qlen 32
I think I have the same problem, and from my observations it is killing my IPsec link (halving the throughput).
Well, rather than explaining how it works, @ms closed the bug with brutal bluntness.
Michael, why didn’t you respond to this thread before it got that far please?
I can see you have been responding to other threads here in the last few days.
I did say earlier that I didn’t understand how qdiscs are set, as there appear to be layers.
If there were some design documents I would have read them but all we have is the wiki page which I was a major author of, before it was moved to the current wiki (which only credits the last person to touch a page).
Regardless, I think the bug was closed because of the way I reported it. There’s still an error being reported here, and inconsistent results for different users. Perhaps I reported it incorrectly, but there’s still a problem.
I started this thread as I thought I’d come across a problem and was hoping to get some help. I’d already searched the internet trying to find errors of this type and double checked my QoS configuration, restored a backup and checked the wiki but had run out of ideas.
Aside from Jon, nobody responded for a month.
Eventually I did my best in my free time to troubleshoot the problem. Other people responded and we noticed inconsistent configuration.
I was eventually encouraged to raise a bug which I did.
I feel that you closed the bug without explaining why it was not a bug. Saying that I “lack basic understanding of how QoS works” is true (I admitted that in this thread days ago) but is also unhelpful. I’m trying not to take it personally, but please consider that being that blunt may make your work a little quicker but discourages contributions.
Why do different people have different results for the qdisc displayed for red0? Shouldn’t that be the same for all users?
What could the error I reported indicate? If it is not a bug can you please give me a tip as to where my system is misconfigured?
Could you please provide a link or briefly explain how QoS is configured here so that I and future readers can learn and avoid creating incorrect bugs?
I did search the internet extensively when troubleshooting this problem but obviously I’ve not been searching for the right things.
To be clear, my graphs stopped working after restarting QoS; however, that may have happened when I broke my configuration, so it is likely my fault.
To round up this discussion.
It isn’t enough to look at the qdiscs for the interfaces.
QoS differentiates traffic into various classes. On my system, all of these classes on imq0 and red0 have a qdisc of fq_codel.
This is correct. The traffic runs through the fq_codel qdiscs attached to the classes, so the main interface qdisc should not add a second codel instance in a row. Having htb there is correct.
I’m glad that this topic has had responses from people with deeper knowledge.
I’d love to learn more about this, but so far no one has addressed my questions nor helped me resolve the problem I reported at the beginning of this thread.
Can you please help me troubleshoot the error I posted at the top? It still happens when QoS is restarted right now with Core update 157.
How can I determine what qdisc is actually in use for my RED uplink and downlink please?
Why is the qdisc shown in the output of ip a different for different people (in this thread) even though they all mention having similar QoS configuration?
Is the /lib/udev/enable_codel script working as intended? It always produces an error when run in isolation (for example when restarting QoS) due to the argument ‘add’ being used with the tc utility instead of ‘replace’.
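For anyone else wanting to check this, the qdisc hierarchy can be inspected directly with the tc utility rather than ip. A minimal sketch (run as root; the device names red0 and imq0 are the ones appearing in this thread, so adjust for your box):

```shell
# Root qdisc and any leaf qdiscs on the RED uplink:
tc qdisc show dev red0

# The HTB classes (the bandwidth limits) hanging off the root,
# with statistics so you can see which classes carry traffic:
tc -s class show dev red0

# Same for the download direction, which is shaped via imq0:
tc qdisc show dev imq0
tc -s class show dev imq0
```

With QoS enabled you would expect an htb root with fq_codel on the leaves; with QoS disabled, a plain fq_codel root.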
Which problem? That error message is not a problem.
A new device is being created, and it is configured in a different way (by the QoS), so it does not need fq_codel.
Literally every “ip” command shows it. You can get the full output by running “qosctrl status”.
Because having QoS on or off makes a difference.
Yes, it does what it is supposed to do. It is not intended to replace the qdisc.
Nobody will write you documentation on how this works. There is tons available on the Internet which you just need to search for. If you have any questions specific to IPFire, feel free to ask. But you cannot expect other people to do your work for you.
I’ll respond to your message when I have a chance, but I want to say that I don’t see why you need to insult someone seeking to learn more.
I never said I wanted you to explain it all for me, but you have implied that it was easy to understand.
I just asked that you please link some relevant documentation somewhere so that myself and others can learn. I’ve gone looking but apparently came to the wrong understanding.
I have responded loads of times to your questions. There is nothing insulting in there.
There is not a single document you can read. It requires understanding about what a qdisc is, what all the different algorithms do. Depending on your level of knowledge you already have about the internals of the networking stack of the Linux kernel, there are probably different pages useful for you.
Thank you for responding twice in discussion here and briefly in bugzilla.
I appreciate efficiency but find that your responses don’t answer my questions in full and come across as being condescending. For example, I’m sure that you are right when you say;
but I’m curious as to where return code 2 comes from. Is it unreasonable for me to want to understand in more detail how QoS is configured in IPFire? (It doesn’t mean that I’m questioning the truth of your word.)
Sorry I think I have not been clear. I don’t want to know how the algorithm works at a mathematical/queue theory level. I want to be able to confirm that it is working in Linux myself. A basic comparison here showed that different users have different results with IPFire.
How can I determine what qdisc is actually in use for my RED uplink and downlink please?
Your response said that “literally every ip command shows it” (you didn’t need to say “literally” as that comes across as an insulting comment).
Did you see the history of this thread, please? I showed that every ip command I run shows a qdisc of htb for red0 and imq0, with fq_codel for the other interfaces. This differs from the output of other users in the thread, all of whom claimed to have QoS enabled/on.
It came across to me that you dismissed this, saying that not all users had QoS on and it was that simple. Can that explain the variety of differences seen here, please?
Thanks for suggesting qosctrl status. It shows complete detail, breaking down all my QoS rules. It includes multiple entries for htb as well as some using fq_codel, and it’s at this level that I’d really like to understand why both are present and how they inter-relate.
At a guess it appears that the htb classes may apply basic bandwidth limits only while fq_codel is used for individual rules.
Could someone with working QoS (on/enabled) and bandwidth limits set please post a (redacted) version of their output from qosctrl status, so I can compare it in an effort to improve my understanding?
imq0 is a pseudo interface (like ifb) because tc qdiscs can only handle outgoing traffic. To filter incoming traffic we redirect it to imq and use its egress qdisc.
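The redirection trick described here can be sketched with ifb, the in-tree equivalent of imq. To be clear, IPFire itself uses imq0, so these exact commands are only an illustration of the concept, not what IPFire runs:

```shell
# Create and bring up an ifb pseudo device:
modprobe ifb
ip link set dev ifb0 up

# Attach an ingress hook to red0 and redirect all incoming
# traffic to ifb0:
tc qdisc add dev red0 handle ffff: ingress
tc filter add dev red0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# An egress qdisc on ifb0 now effectively shapes red0's *incoming* traffic:
tc qdisc add dev ifb0 root handle 1: htb default 10
```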
The return code comes from the Linux kernel, because we have added a udev rule that tries to set fq_codel on all new interfaces, but ifb doesn’t have the needed hook (ingress), so codel cannot be added.
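The difference between tc’s ‘add’ and ‘replace’ verbs mentioned earlier in the thread can be seen like this (a sketch only, using the red0 device name from this thread; requires root):

```shell
# 'add' fails if a root qdisc (e.g. htb, installed by QoS) already
# exists on the device -- tc typically exits with code 2 here:
tc qdisc add dev red0 root fq_codel
echo $?   # non-zero when a root qdisc was already present

# 'replace' succeeds either way, swapping in the new qdisc:
tc qdisc replace dev red0 root fq_codel
```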
This script is for the default config without QoS.
The default qdisc on red is fq_codel without QoS, but if you enable QoS it will be switched to HTB on the interfaces, and fq_codel is used only in the subclasses. The script that does this is generated based on your config. Check /var/ipfire/qos/bin/makeqosscripts.pl
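The rough shape of what such a generated script sets up can be sketched as follows. This is not the actual IPFire script; the rates, class IDs, and default class here are placeholders:

```shell
# HTB becomes the root qdisc on red0 and enforces the bandwidth limits:
tc qdisc add dev red0 root handle 1: htb default 120
tc class add dev red0 parent 1:  classid 1:1   htb rate 10mbit
tc class add dev red0 parent 1:1 classid 1:110 htb rate 2mbit ceil 10mbit
tc class add dev red0 parent 1:1 classid 1:120 htb rate 1mbit ceil 10mbit

# fq_codel is attached only to the leaf classes, so each class gets its
# own codel queue. This is why 'ip a' shows htb on red0 while
# 'qosctrl status' also shows fq_codel entries:
tc qdisc add dev red0 parent 1:110 fq_codel
tc qdisc add dev red0 parent 1:120 fq_codel
```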