Configure a QoS class for RDP sessions

Good afternoon, everyone. I’m currently setting up a QoS class for working on a remote desktop. With the current settings there is a problem: the image on the remote desktop starts to freeze for 2-3 seconds for as long as the bandwidth limit of the default class, which handles incoming traffic, is being hit.

That is, part of the traffic that should be matched by the RDP connection class ends up in the default class, where the priority is lower and packets are dropped when the limit is reached.

I just can’t figure out which rules are still missing for the RDP class so that all of this traffic falls under my class 203.

To connect I use the standard port 3389. To analyze the traffic I use Wireshark, and according to it, all the traffic goes through port 3389.

I would like to be able to see exactly how this or that traffic was classified and sent to the default class: which ports and protocols were involved.
However, I did not find such an option in IPFire.
The information in /var/log/rrd/ is similar to what is in the graph, without the detail I need.

If I try to filter messages by interface with grep -i imq0 /var/log/messages, then at the time of a QoS restart I see only the following:

Jun 18 14:37:19 gw root: Could not find a bridged zone for imq0
Jun 18 14:37:20 gw vnstatd[3562]: Traffic rate for "imq0" higher than set maximum 1000 Mbit (20s->2673868800, r18446744065993504072 t18446744065993504072, 64bit:0), syncing.

Perhaps I missed where I can enable more detailed logging for this interface?
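
The closest thing I found is tc’s own per-class statistics: every class keeps byte and packet counters, so comparing them before and after an RDP session at least shows which class the traffic lands in. A sketch, assuming the IPFire QoS classes are ordinary HTB classes on imq0:

[root@gw ~]# tc -s class show dev imq0
[root@gw ~]# tc -s filter show dev imq0

If the counters of class 203 stay flat while the default class grows during a session, the filters for 203 are simply not matching.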

level7config:
203;imq0;rdp;;;
203;imq0;ssh;;;
203;imq0;vnc;;;

portconfig:
203;imq0;esp;;;;;
203;imq0;rdp;;;;;
203;imq0;tcp;;1194;;;
203;imq0;tcp;;3389;;;
203;imq0;tcp;;;;1194;
203;imq0;udp;;1194;;;
203;imq0;udp;;3389;;;
203;imq0;udp;;4500;;4500;
203;imq0;udp;;500;;500;
203;imq0;udp;;;;1194;
203;imq0;ipv4;;3389;;;
203;imq0;rdp;;3389;;;

tosconfig:
203;imq0;2;

According to this instruction

It seems to me that you’re giving both the VPN and the remote control applications… the same lane, which then has to be shared among all of those protocols.
If ssh, vnc and rdp are mostly used via VPN, the protocols compete with each other: the inner one for the application and the outer one for the transport. In my mind, this could lead to a situation where, if a file transfer happens via ssh, all of the VPN traffic is slowed down more than necessary because ssh eats up the bandwidth allocated for IPsec or OpenVPN.

I’d suggest creating a separate queue for your remote connection protocols, with slightly less bandwidth than the one allocated for the VPN protocols.
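
A rough sketch of the split, reusing your own entries (class number 204 is just an example, not tested):

level7config:
204;imq0;rdp;;;
204;imq0;ssh;;;
204;imq0;vnc;;;

portconfig:
204;imq0;tcp;;3389;;;
204;imq0;udp;;3389;;;
203;imq0;esp;;;;;
203;imq0;tcp;;1194;;;
203;imq0;udp;;1194;;;
203;imq0;udp;;500;;500;
203;imq0;udp;;4500;;4500;

That way class 203 carries only the VPN transport (IPsec and OpenVPN), and the remote control protocols get their own, slightly smaller lane in 204.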

Hello, Pike.
Thank you for your recommendation. I will definitely divide this class into two.

However, how can this help solve the described problem? After all, in my case the bandwidth limit of class 203 is not reached.
The problem only starts to show when the default class (210 on the graph) overflows its channel width.

Is RDP used only via VPN?

Not only. I also use the connection while on the subnet where the remote desktop is located.

I will update the information on this issue.

I divided all the services I have in QoS into separate classes. After watching the result, I noticed that the “RDP” class does not recognize RDP traffic at all.

I am currently working from a local network, where 192.168.7.182 is a workstation, and 192.168.8.18 is an RDS server.

Output from Wireshark:

Outgoing packets (random ones):

No. Time Source Destination Protocol Length Info
410 13.036259 192.168.7.182 192.168.8.18 TCP 66 58549 → 3389 [ACK] Seq=2709 Ack=469 Win=512 Len=0 SLE=432 SRE=469
2296 28.474286 192.168.7.182 192.168.8.18 TCP 54 58549 → 3389 [ACK] Seq=27358 Ack=1562 Win=512 Len=0
2434 30.219017 192.168.7.182 192.168.8.18 TCP 54 58549 → 3389 [ACK] Seq=29641 Ack=1613 Win=512 Len=0

Incoming packets (random ones):

No. Time Source Destination Protocol Length Info
2104 26.580196 192.168.8.18 192.168.7.182 TCP 66 3389 → 58549 [ACK] Seq=1386 Ack=23198 Win=63407 Len=0 SLE=22755 SRE=22805
2106 26.590496 192.168.8.18 192.168.7.182 TCP 60 3389 → 58549 [ACK] Seq=1386 Ack=23298 Win=63307 Len=0
2129 26.716787 192.168.8.18 192.168.7.182 TCP 66 3389 → 58549 [ACK] Seq=1386 Ack=23691 Win=62914 Len=0 SLE=23398 SRE=23691

Current class settings:

[root@gw ~]# grep -r 108 /var/ipfire/qos/
/var/ipfire/qos/classes:red0;108;3;2400;12000;;;;RDP;
/var/ipfire/qos/level7config:108;red0;rdp;;;
/var/ipfire/qos/portconfig:108;red0;rdp;;;;;
/var/ipfire/qos/portconfig:108;red0;tcp;;;;3389;
/var/ipfire/qos/portconfig:108;red0;udp;;;;3389;
/var/ipfire/qos/tosconfig:108;red0;2;
[root@gw ~]# grep -r 208 /var/ipfire/qos/
/var/ipfire/qos/classes:imq0;208;2;2400;12000;;;;RDP;
/var/ipfire/qos/level7config:208;imq0;rdp;;;
/var/ipfire/qos/portconfig:208;imq0;rdp;;;;;
/var/ipfire/qos/portconfig:208;imq0;tcp;;3389;;;
/var/ipfire/qos/portconfig:208;imq0;udp;;3389;;;
/var/ipfire/qos/tosconfig:208;imq0;2;
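
For reference, my reading of the fields in the classes lines (deduced from the WUI, so take it with a grain of salt):

red0;108;3;2400;12000;;;;RDP;
device;class;priority;guaranteed kbit/s;maximum kbit/s;burst;cburst;(unused);name

So class 108 on red0 is guaranteed 2400 kbit/s and may use up to 12000 kbit/s.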

Why do you think this is happening?


Now I’m asking the “dumb” question: how do QoS rules work?

1: Is the application/port matched on the destination of the connection (e.g., 3389 in this case) or on the source? (I hope for the former.)
2: For the “answer” traffic, should there be a corresponding “reversed” rule?

Good afternoon, Pike. Those are logical questions, thanks.

Yes, the destination port for the application is 3389. The server also responds from port 3389.
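
To double-check that on the gateway itself rather than in Wireshark on the workstation, tcpdump on the interface the session crosses shows both directions (green0 here is only an assumption; substitute whichever zone actually carries the session):

[root@gw ~]# tcpdump -ni green0 'tcp port 3389'

Client-to-server packets have destination port 3389, and the replies come back with source port 3389 and the client’s ephemeral port (58549 in the captures above) as the destination.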

I tried restarting QoS again, and after a while the traffic started showing up.

Keep us posted? :wink:

Sure. I had a few thoughts. I will check them and update the information.

Good afternoon, everyone. So, the current situation:

  1. The routing settings to the RDP host have been changed. It is now reachable via a separate OpenVPN Net-to-Net channel to another gateway, without QoS.
  2. The logic is that QoS should not act on this interface (tun1), in theory, because it only works on red0 and imq0. However, this traffic still occupies part of the total channel width, so I subtracted it from the total bandwidth configured in QoS, with a small margin, to guarantee bandwidth for the traffic that goes through this interface: 15 Mbit/s out of a total of 50 Mbit/s.
    2.1. Separately, I also wanted to change the interface speed from Speed: 10000Mb/s to Speed: 15Mb/s, but I did not find the interface configuration file, so I left that alone. There was also the option of doing it through the tc utility (see the sketch after this list), but first I would like to clarify whether you have an even more suitable solution for limiting the interface speed?
  3. I changed the class settings for the new channel width of 35 Mbit/s, taking into account 3% for the buffer. As a result, the default class bandwidth was also greatly reduced, to 5-10%.
  4. When testing this model, I quickly overflowed the default class, because few resources were allocated to it, and the previously observed problems on the RDP host appeared again.
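
The tc variant I had in mind is a plain token bucket filter on the tunnel device; just a sketch, not applied yet (the burst and latency values are only a starting point):

[root@gw ~]# tc qdisc add dev tun1 root tbf rate 15mbit burst 32k latency 50ms
[root@gw ~]# tc qdisc del dev tun1 root

As far as I understand, this shapes only egress on tun1, and the Speed: 10000Mb/s reported for the interface is a nominal value that does not limit anything by itself.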

As a temporary solution, I increased the bandwidth for the default classes back to 25-50%, and again everything works correctly until that lane is completely occupied by some p2p traffic, for example.

It seems that I was wrong and this cannot be a solution for me, even though traffic runs properly through the tunnel.

[root@gw ~]# iftop -i tun1