HQF: Truly Hierarchical Queuing
After the initial tests of the HQF framework, I wanted to check how “hierarchical” it really is. As before, I created a policy-map allocating various bandwidth percentages to individual TCP/UDP ports. One of the classes had a child service policy that allocated 70% of the class bandwidth to TCP traffic and 30% to UDP traffic (going to the same port number), with fair queuing used in the TCP subclass.
Short summary: HQF worked brilliantly.
Here’s the relevant configuration: the per-interface policy …
policy-map WAN
 class P5001
  bandwidth percent 20
  fair-queue
 class P5003
  bandwidth percent 30
  service-policy Intra
 class class-default
  fair-queue
… and the child policy:
policy-map Intra
 class TCP
  bandwidth percent 70
  fair-queue
 class class-default
  bandwidth percent 30
You have to create the child policy first; IOS will not let you attach a non-existent policy-map as a service-policy within a class.
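The class maps and access lists used by these policies come from the previous lab setup and aren't repeated here. A minimal sketch of the P5003 and TCP classes could look like the lines below; the port numbers and ACL contents are my assumption, the printout further down only confirms that the class maps match named access groups:

ip access-list extended P5003
 permit tcp any any eq 5003
 permit udp any any eq 5003
ip access-list extended TCP
 permit tcp any any eq 5003
!
class-map match-all P5003
 match access-group name P5003
class-map match-all TCP
 match access-group name TCP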
I’ve applied the policy-map to a 2 Mbps point-to-point link and started various traffic sources (similar to the previous tests).
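For completeness, this is how the service policy would be attached to the WAN interface (a sketch; the interface name is taken from the printout below and the bandwidth statement matching the 2 Mbps link speed is my assumption):

interface Serial0/1/0
 bandwidth 2000
 service-policy output WAN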
Please read the previous HQF-related posts to find a detailed lab description, complete router configurations and traffic source descriptions.
As before, the show policy-map interface command can be used to inspect the QoS state of an interface. The printout clearly documents the hierarchical queuing policies and shows the traffic and drop statistics for each class:
a1#show policy-map interface ser 0/1/0
 Serial0/1/0

  Service-policy output: WAN

    Class-map: P5001 (match-all)
      303 packets, 367480 bytes
      30 second offered rate 409000 bps, drop rate 15000 bps
      Match: access-group name P5001
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 111/0/0/0
      (pkts output/bytes output) 303/367480
      bandwidth 20% (400 kbps)
      Fair-queue: per-flow queue limit 16

    Class-map: P5003 (match-all)
      5407 packets, 1706420 bytes
      30 second offered rate 2059000 bps, drop rate 810000 bps
      Match: access-group name P5003
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 174/4465/0
      (pkts output/bytes output) 945/546300
      bandwidth 30% (600 kbps)

      Service-policy : Intra

        Class-map: TCP (match-all)
          315 packets, 382500 bytes
          30 second offered rate 418000 bps, drop rate 11000 bps
          Match: access-group name TCP
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 107/0/0/0
          (pkts output/bytes output) 315/382500
          bandwidth 70% (420 kbps)
          Fair-queue: per-flow queue limit 16

        Class-map: class-default (match-any)
          5092 packets, 1323920 bytes
          30 second offered rate 1620000 bps, drop rate 1500000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 64/4466/0
          (pkts output/bytes output) 630/163800
          bandwidth 30% (180 kbps)

    Class-map: class-default (match-any)
      5097 packets, 1325220 bytes
      30 second offered rate 1635000 bps, drop rate 733000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 18/1605/0/1605
      (pkts output/bytes output) 3496/908540
      Fair-queue: per-flow queue limit 16
The results of the iperf program matched the expectations very well: the P5003 class gets 30% of the 2 Mbps link (600 kbps) and the TCP traffic gets 70% of that (420 kbps). Once the TCP/IP overhead is subtracted, the goodput should end up slightly above 400 kbps, and it does.
$ iperf -c 10.0.20.10 -t 3600 -p 5003 -i 60 -P 10
------------------------------------------------------------
Client connecting to 10.0.20.10, TCP port 5003
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
…
[SUM] 480.0-540.0 sec 2.87 MBytes 401 Kbits/sec
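A quick back-of-the-envelope check of those numbers (my calculation, assuming roughly 1500-byte TCP segments with 40 bytes of TCP/IP header overhead):

0.30 * 2000 kbps = 600 kbps   ... P5003 class guarantee
0.70 *  600 kbps = 420 kbps   ... TCP subclass guarantee
420 kbps * 1460/1500 = ~409 kbps of available TCP goodput

The measured 401 kbit/s is within a few percent of that figure.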
The class-map counters explain the heavy drops: the offered load in the P5003 class alone was over 2 Mbps, more than three times its guaranteed 600 kbps (and more than the whole link can carry).
    Class-map: P5003 (match-all)
      5407 packets, 1706420 bytes
      30 second offered rate 2059000 bps, drop rate 810000 bps