HQF: intra-class fair queuing

Continuing from my first excursion into the brave new world of HQF, I wanted to check how well intra-class fair queuing works. I started with the same testbed and router configurations as before and configured the following policy-map on the WAN interface:

policy-map WAN
 class P5001
  bandwidth percent 20
  fair-queue
 class P5003
  bandwidth percent 30
 class class-default
  fair-queue
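
The class maps and the service-policy attachment come from the original testbed configuration and aren't repeated above. A rough sketch of what they could look like follows; the port-based ACLs, the interface name and the 2 Mbps link bandwidth are assumptions consistent with the 20% = 400 kbps figure in the show printout and the iperf port numbers used in the tests:

! Sketch only: class maps and interface attachment assumed from context
ip access-list extended P5001
 permit tcp any any eq 5001
ip access-list extended P5003
 permit tcp any any eq 5003
 permit udp any any eq 5003
!
class-map match-all P5001
 match access-group name P5001
class-map match-all P5003
 match access-group name P5003
!
interface Serial0/1/0
 bandwidth 2000
 service-policy output WAN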

The test used this background load:

Class           Background load
P5001           10 parallel TCP sessions
P5003           1500 kbps UDP flood
class-default   1500 kbps UDP flood
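
The background load was generated with iperf (against matching iperf servers on 10.0.20.10); the commands could look roughly like this. Only port 5003 is confirmed by the transcripts below; the other port numbers are assumptions:

$ iperf -c 10.0.20.10 -p 5001 -P 10 -t 3600        # 10 parallel TCP sessions (P5001)
$ iperf -c 10.0.20.10 -u -p 5003 -b 1500k -t 3600  # 1500 kbps UDP flood (P5003)
$ iperf -c 10.0.20.10 -u -p 5009 -b 1500k -t 3600  # 1500 kbps UDP flood (class-default; port 5009 is hypothetical)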

As expected, the bandwidth distribution between the three traffic classes was almost optimal:

a1#show policy-map interface serial 0/1/0 | include map|bps
    Class-map: P5001 (match-all)
      30 second offered rate 394000 bps, drop rate 0 bps
        bandwidth 20% (400 kbps)
    Class-map: P5003 (match-all)
      30 second offered rate 2073000 bps, drop rate 1479000 bps
        bandwidth 30% (600 kbps)
    Class-map: class-default (match-any)
      30 second offered rate 1780000 bps, drop rate 790000 bps
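
The arithmetic checks out on the 2 Mbps link implied by the 20% = 400 kbps figure: P5001 offers 394 kbps, stays under its 400 kbps guarantee and sees no drops; P5003 forwards 2073 - 1479 = 594 kbps, almost exactly its 600 kbps guarantee; and class-default forwards 1780 - 790 = 990 kbps, roughly the remaining half of the link.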

Next, I started a single iperf TCP session in the P5003 class. With traditional CB-WFQ, the session wouldn't even start due to the heavy congestion caused by the UDP floods. The same problem occurred with HQF, since the P5003 class was still using FIFO queuing.
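
Enabling fair queuing within P5003 is a one-line change to the policy map:

policy-map WAN
 class P5003
  fair-queue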

Once I configured fair-queue within the P5003 class, the TCP session got roughly half of the allocated bandwidth:

$ iperf -c 10.0.20.10 -t 3600 -p 5003 -i 60
------------------------------------------------------------
Client connecting to 10.0.20.10, TCP port 5003
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 10.0.0.10 port 1309 connected with 10.0.20.10 port 5003
[ ID] Interval Transfer Bandwidth
[1916] 0.0-60.0 sec 2.05 MBytes 287 Kbits/sec
[1916] 60.0-120.0 sec 2.05 MBytes 286 Kbits/sec
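
That's exactly what per-flow fair queuing should deliver: the 600 kbps guaranteed to P5003 is now split between two flows (the TCP session and the UDP flood), roughly 300 kbps each, and the measured 287 kbps goodput comes close to that share.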

As expected, if you start numerous parallel TCP sessions, each one gets roughly as much bandwidth as the UDP flooding stream. I started ten parallel TCP sessions with iperf and got an aggregate goodput of 524 kbps (leaving the UDP flood with approximately 60 kbps):

$ iperf -c 10.0.20.10 -t 3600 -p 5003 -i 60 -P 10
------------------------------------------------------------
Client connecting to 10.0.20.10, TCP port 5003
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 10.0.0.10 port 1310 connected with 10.0.20.10 port 5003
[1900] local 10.0.0.10 port 1311 connected with 10.0.20.10 port 5003
[1884] local 10.0.0.10 port 1312 connected with 10.0.20.10 port 5003
[1868] local 10.0.0.10 port 1313 connected with 10.0.20.10 port 5003
[1852] local 10.0.0.10 port 1314 connected with 10.0.20.10 port 5003
[1836] local 10.0.0.10 port 1315 connected with 10.0.20.10 port 5003
[1820] local 10.0.0.10 port 1316 connected with 10.0.20.10 port 5003
[1804] local 10.0.0.10 port 1317 connected with 10.0.20.10 port 5003
[1788] local 10.0.0.10 port 1318 connected with 10.0.20.10 port 5003
[1772] local 10.0.0.10 port 1319 connected with 10.0.20.10 port 5003
[ ID] Interval Transfer Bandwidth
[1868] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1820] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1772] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1836] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1916] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1900] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1852] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1804] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1788] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[1884] 0.0-60.0 sec 384 KBytes 52.4 Kbits/sec
[SUM] 0.0-60.0 sec 3.75 MBytes 524 Kbits/sec
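
The per-flow math matches again: 600 kbps divided among eleven flows (ten TCP sessions plus the UDP flood) is roughly 55 kbps per flow, in line with the 52.4 kbps measured for each TCP session.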

This is one of the perfectly valid reasons Service Providers hate peer-to-peer file-sharing services like BitTorrent: an application that opens dozens of parallel sessions grabs a correspondingly larger share of any per-flow fair-queuing scheme.

3 comments:

  1. Cisco's QoS SRND hasn't been updated since 2005 (we're still at version 3.3). Do you figure this HQF warrants an update, or is the best practice still to use the previous queueing methodologies?

    Thanks.
  2. Based on what I've seen so far, I would much prefer HQF over older QoS implementations. However, the installed base is probably pretty thin at the moment.
  3. This is great work, and I'm excited about learning enough of this to roll it out in my network. Does fair-queueing scale? What if instead of 10 flows, you had 100? Would each get 6kbps? What about 1000 (0.6kbps)? At some point the queue-limit kicks in, does it not?