Good afternoon everyone. I’m currently setting up a QoS class for working over a remote desktop. With the current settings there is a problem: the image on the remote desktop freezes for 2-3 seconds whenever the default class, which handles incoming traffic, reaches its allocated bandwidth limit.
In other words, part of the traffic that should fall under the RDP connection class ends up in the default class, where the priority is lower and packets are dropped once the limit is reached.
I just can’t figure out which rules are missing from the RDP class so that all of this traffic is handled by my class 203.
I use the standard port 3389 for the connection, and Wireshark to analyze the traffic. According to Wireshark, all of the traffic goes through port 3389.
I would like to see exactly how a given flow was classified and sent to the default class: which ports and protocols were involved? However, I did not find such an option in IPFire.
The information in /var/log/rrd/ is similar to what the graph shows, without the detail I need.
If I filter messages by interface (grep -i imq0 /var/log/messages), I only see the following at the moment QoS is restarted:
Jun 18 14:37:19 gw root: Could not find a bridged zone for imq0
Jun 18 14:37:20 gw vnstatd[3562]: Traffic rate for “imq0” higher than set maximum 1000 Mbit (20s->2673868800, r18446744065993504072 t18446744065993504072, 64bit:0), syncing.
Perhaps I missed where I can enable more detailed logging for this interface?
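For what it’s worth, I’m not aware of IPFire exposing per-class classification detail in the web UI, but the underlying tools can show it. The following is a sketch, assuming IPFire’s QoS uses tc classes on red0/imq0 and iptables mangle rules to mark traffic; the LOG rules and their prefixes are my own hypothetical additions, not part of IPFire:

```shell
# Per-class packet/byte counters; the class whose counters grow
# is the one actually receiving the traffic.
tc -s class show dev imq0
tc -s class show dev red0

# Hit counters for the mangle rules that mark traffic for QoS.
iptables -t mangle -L -v -n --line-numbers

# Hypothetical: temporarily log RDP packets as they pass the mangle
# table, so /var/log/messages shows what the classifier sees.
iptables -t mangle -I POSTROUTING -p tcp --dport 3389 -j LOG --log-prefix "QOS-RDP-out: "
iptables -t mangle -I POSTROUTING -p tcp --sport 3389 -j LOG --log-prefix "QOS-RDP-in: "
```

Remember to delete the LOG rules afterwards, as they can be noisy under load.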
It seems to me that you’re giving both the VPN and the remote control application the same lane, which then has to be shared among all of those protocols.
If ssh, vnc and rdp are mostly used via the VPN, the protocols compete with each other: the inner one for the application, and the outer one for the transport. In my mind, this could lead to a situation where a file transfer over ssh slows down all the VPN traffic more than necessary, because ssh eats up the bandwidth allocated to IPsec or OpenVPN.
I’d suggest you create a separate queue for your remote connection protocols, with slightly less bandwidth than the one allocated for the VPN protocols.
Hello, Pike.
Thank you for your recommendation. I will definitely divide this class into two.
However, how does this help solve the described problem? In my case, the bandwidth limit of class 203 is not being reached.
The problem only appears when the default class (210 on the graph) overflows.
I divided all of my services in QoS into separate classes. After watching the result, I noticed that the “RDP” class does not match RDP traffic at all.
Now I’m asking the “dumb” question: how do QoS rules work?
1: Is the application/port matched on the destination of the call (e.g. 3389 in this case) or on the source? (I hope the former.)
2: For the “answer” traffic, should there be a corresponding rule in the reverse direction?
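On question 2: shaping rules are typically stateless, so a rule matching destination port 3389 only catches the outbound direction; the replies carry 3389 as the source port and need their own rule. A minimal sketch of what such a pair could look like in iptables terms (the mark value 203 and the interface names are assumptions based on the class numbering above, not IPFire’s actual generated rules):

```shell
# Outbound RDP: client -> server, matched by destination port.
iptables -t mangle -A POSTROUTING -o red0 -p tcp --dport 3389 -j MARK --set-mark 203
# Return traffic: server -> client, matched by source port.
iptables -t mangle -A POSTROUTING -o imq0 -p tcp --sport 3389 -j MARK --set-mark 203
```

If only the first of the two directions exists, the return stream falls through to the default class, which would match the behavior described here.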
Good afternoon, everyone. So, the current situation:
The routing settings to the RDP host have been changed. It is now accessible via a separate OpenVPN Net-to-Net channel to another gateway without QoS.
The logic is that QoS should not act on this interface (tun1), in theory, because it only works on red0 and imq0. However, this traffic still occupies part of the total bandwidth, so I subtracted it from the total bandwidth configured for QoS, with a small margin, to guarantee bandwidth for the traffic going through this interface: 15 Mbit out of a total of 50 Mbit.
2.1. Separately, I also wanted to change the interface speed from Speed: 10000Mb/s to Speed: 15Mb/s, but I could not find the interface configuration file, so I didn’t. There was also the option of doing this with the tc utility, but first I would like to ask whether there is an even more suitable way to limit the interface speed?
I changed the class settings for the new bandwidth of 35 Mbit, allowing 3% for the buffer. As a result, the default class speed was also greatly reduced, to 5-10%.
When testing this model, I quickly overflowed the default class because too few resources were allocated to it, and the problems previously observed on the RDP host appeared again.
As a temporary fix, I increased the default class bandwidth back to 25-50%, and again everything works correctly until that lane is completely filled by, for example, some p2p traffic.
It seems I was wrong and this cannot be a solution for me, even though the traffic runs properly through the tunnel.