First HQF impressions: excellent job

Several readers told me that the Hierarchical Queuing Framework introduced in IOS releases 12.4(20)T and 15.0 (why do I always have the urge to write 12.5?) works much better than CB-WFQ. After spending several hours trying to break HQF, I have to concur: Cisco’s engineers did a splendid job. However, the HQF behavior might be slightly counterintuitive to those who have grown too familiar with CB-WFQ.

For example, faced with this configuration …

policy-map WAN
class P5001
bandwidth percent 20
class P5003
bandwidth percent 30
class class-default

… one might assume that all three classes get a proportional share of the remaining bandwidth (50%). Not true. An HQF class with a bandwidth allocation is guaranteed exactly what it asked for. It might get more, but when all classes are fully congested, the remaining bandwidth is distributed equally among the classes without an explicit bandwidth allocation.
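The allocation rule can be sketched in a few lines of Python. This is only a model of the behavior described above, not actual IOS code; the function name is mine, and the class names and link rate match the test setup below.

```python
def hqf_min_guarantees(link_kbps, percent_by_class):
    """Model of HQF minimum bandwidth guarantees.

    percent_by_class maps class name -> configured 'bandwidth percent',
    or None for classes without an explicit allocation (e.g. class-default).
    Classes with an allocation are guaranteed exactly their share; the
    remainder is split equally among the unallocated classes.
    """
    allocated = {c: p for c, p in percent_by_class.items() if p is not None}
    unallocated = [c for c, p in percent_by_class.items() if p is None]
    remaining_pct = 100 - sum(allocated.values())
    shares = {c: link_kbps * p / 100 for c, p in allocated.items()}
    for c in unallocated:
        # Each unallocated class gets an equal slice of the leftover bandwidth
        shares[c] = link_kbps * remaining_pct / 100 / len(unallocated)
    return shares

# On the 2 Mbps link with the policy above:
# {'P5001': 400.0, 'P5003': 600.0, 'class-default': 1000.0}
print(hqf_min_guarantees(2000, {"P5001": 20, "P5003": 30, "class-default": None}))
```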

When I ran 30 parallel TCP sessions across a 2 Mbps link (10 TCP sessions in each class), I got these results:

a1#show policy-map interface serial 0/1/0 | include map|bps
Class-map: P5001 (match-all)
30 second offered rate 418000 bps, drop rate 21000 bps
bandwidth 20% (400 kbps)
Class-map: P5003 (match-all)
30 second offered rate 613000 bps, drop rate 15000 bps
bandwidth 30% (600 kbps)
Class-map: class-default (match-any)
30 second offered rate 997000 bps, drop rate 0 bps

As you can see, all the remaining bandwidth was used by the best-effort class-default: it got roughly 1000 kbps (the 50% of the link not allocated to P5001 and P5003), while the two bandwidth classes were throttled down to their 400 kbps and 600 kbps guarantees.


I performed the QoS tests on a 2 Mbps PPP link between two 2800-series routers running IOS release 15.0(1)M. The relevant parts of the router configuration are shown below.

Each access list matches TCP and UDP traffic to and from a single port; I needed a mix of TCP and UDP to test intra-class queuing behavior.

ip access-list extended P5001
permit tcp any any eq 5001
permit tcp any eq 5001 any
permit udp any any eq 5001
permit udp any eq 5001 any
ip access-list extended P5002
permit tcp any any eq 5002
permit tcp any eq 5002 any
permit udp any any eq 5002
permit udp any eq 5002 any
ip access-list extended P5003
permit tcp any any eq 5003
permit tcp any eq 5003 any
permit udp any any eq 5003
permit udp any eq 5003 any

Class maps:

class-map match-all P5001
match access-group name P5001
class-map match-all P5003
match access-group name P5003
class-map match-all P5002
match access-group name P5002

Interface configuration:

interface Serial0/1/0
bandwidth 2000
ip address
encapsulation ppp
ip ospf 1 area 0
load-interval 30
service-policy output WAN

I used iperf to generate the TCP load and my own tool to generate the UDP load. The following command was used to start iperf:

$ iperf -c host -t 3600 -p port -i 60 -P 10
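My UDP generator isn’t shown here; a minimal stand-in could look like the Python sketch below. The function name, payload size, and the simple sleep-based pacing are illustrative assumptions, not the original tool.

```python
import socket
import time

def udp_load(host, port, rate_kbps, duration_s, payload_len=512):
    """Send fixed-size UDP datagrams at an approximate target rate.

    Returns the number of datagrams sent. Pacing is done with a simple
    per-datagram sleep, so the achieved rate is only approximate.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (payload_len * 8) / (rate_kbps * 1000.0)  # seconds per datagram
    payload = b"\x00" * payload_len
    sent = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        sock.sendto(payload, (host, port))
        sent += 1
        time.sleep(interval)
    sock.close()
    return sent
```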


  1. Indeed,

    I've been playing around with HQF for some time yesterday and found the same thing: the classes with no explicit bandwidth allocation implicitly get the "remaining" bandwidth guarantee (shared equally between all classes not configured for bandwidth). That's quite a surprise if you're still stuck in CB-WFQ logic. Most important of all, this means the underlying scheduler is nothing like WFQ.

    Another nice thing is hierarchical shaping (a shape in the parent class accompanied by a shape in the child policy) and explicit bandwidth allocation by the "shape" command (the bandwidth guarantee equals the shape rate). This reminds me of the SRR algorithm used on the 3560/3750 platforms. By the way, I found some really nice statements in the post you linked to in your initial HQF post, such as:

    "Class-Based shaping policy applied to subinterface in HQF code: 512 packets, not tunable (investigate with NSSTG QoS platform Team, should it be tunable)"

    or the answer to my question about CBWFQ class-default fair-queue + random-detect:

    "Conversely, if fair-queue and random-detect are used together in class-default, the queue-limit will be ignored and all flow-queues will share the same WRED thresholds. As such, all packets currently enqueued in all flow-queues will be used to calculate the WRED Average Queue Size. Because the Current Queue Size has no upper limit in this configuration, the opportunity for no-buffer drops is high"

    Furthermore, even though the details of HQF are thoroughly hidden (some of the previously helpful CLI commands are now deprecated), it appears HQF is highly optimized for hardware implementations. And indeed, distributed routing platforms and some multilayer switches now support it.

    Maybe it's the first step toward implementing the dream of a unified QoS engine! :)
  2. I think this excerpt from Cisco explains why the new QoS behavior in HQF has worked better than the pre-HQF behavior:

    "Allocation of Bandwidth to Class Default

    Old Behavior

    The default class can use up to 25 percent of total available bandwidth; however, the entire 25 percent is not guaranteed. Rather, it is proportionately shared between different flows in the default class and excess traffic from other bandwidth classes. Thus, the amount of bandwidth that the default class will receive depends on a number of factors, including the total number of flows currently in the router, the bandwidth guarantees (or weights) made to the other user-defined classes, and the number of hash queues in the router. To make minimum bandwidth guarantees to the default class, the bandwidth command needs to be explicitly configured under the class in the policy.

    New Behavior

    The class default has a default minimum guarantee that equals the difference between the total available bandwidth (for example, link rate, shaped rate) and the amount of bandwidth guaranteed to the other classes. For example, if 90 percent of bandwidth is allocated to other classes, then the class default is guaranteed the remaining 10 percent. If there is no traffic in the class default, then the other classes share that 10 percent proportionally. Alternatively, the user can explicitly configure the amount of bandwidth that should be available to default class using the bandwidth <x> command. This will lower the guarantee that is given to the class default and allow 10 minus "x" to always be available for the other classes."

    It also appears that in HQF Cisco has, by default, allocated a minimum 1% bandwidth guarantee to class-default. Please see the excerpt below. If this is true, it means the sum of all bandwidth allocations in the other classes must be 99% or less. I need to test it in the lab to confirm.


    "Old Behavior

    The default maximum reserved bandwidth is 75 percent, so the maximum bandwidth that can be guaranteed to any user-defined class is also 75 percent. If 75 percent of the bandwidth is allocated only for the LLQ, then no minimum bandwidth can be guaranteed to the other classes, and they will share the remaining 25 percent bandwidth with the class default traffic.
    If more bandwidth needs to be allocated, use the max-reserved-bandwidth command to modify the bandwidth amount that can be reserved for user-defined classes.

    New Behavior

    The max-reserved-bandwidth command no longer affects the amount of bandwidth available to a service-policy. 1% must be reserved for the class-default with the rest being available to the user classes. Please also refer to the previous section "Allocation of Bandwidth to Class Default.""

    URL of the excerpts:
  3. Yet again, a slightly misleading explanation from Cisco. The remaining (non-reserved) bandwidth is split equally between all classes without explicit bandwidth guarantees, not given to the class-default.
  4. Agreed. I just confirmed in the lab that in HQF, IOS implicitly reserves a minimum of 1% of the bandwidth for class-default. I tried to enter the following QoS policy on the router:

    policy-map WAN-QOS
    class TCP
    bandwidth percent 50
    class UDP
    bandwidth percent 30
    class ICMP
    bandwidth percent 20

    As you can see, the sum of all class bandwidth allocations is 100%. Before HQF, this configuration was accepted in IOS 12.4(15)T10; in IOS 15.0 it is rejected because the total exceeds 99 percent:

    C3825(config)#policy-map WAN-QOS
    C3825(config-pmap)# class TCP
    C3825(config-pmap-c)# bandwidth percent 50
    C3825(config-pmap-c)# class UDP
    C3825(config-pmap-c)# bandwidth percent 30
    C3825(config-pmap-c)# class ICMP
    C3825(config-pmap-c)# bandwidth percent 20
    Sum total of class bandwidths exceeds 99 percent

    It looks as though Cisco now checks the total bandwidth allocation under a policy-map before the policy is even applied, to ensure class-default gets its minimum 1%.

    When I pushed traffic through the router using the above configuration (with the bandwidth percent under class ICMP changed to 19%), the traffic generator received 1% of the traffic from class-default, which confirmed the implicit bandwidth reservation behavior.
  5. I'm finding that I cannot get the shape portion to work in the new HQF. While it worked great pre-HQF, post-HQF lets all the bandwidth through. Here's an example. You will notice that the target shape rate is 9.5 Mbps, but it is allowing 10 Mbps through (the maximum rate at which the ISP sends data down that pipe).

    Class-map: FROM_FIBRE (match-any)
    391978 packets, 399555763 bytes
    30 second offered rate 9849000 bps, drop rate 0 bps
    Match: qos-group 1
    391976 packets, 399550569 bytes
    30 second rate 9849000 bps
    queue limit 32768 packets
    (queue depth/total drops/no-buffer drops) 0/0/0
    (pkts output/bytes output) 18950/2510736
    shape (average) cir 9500000, bc 38000, be 38000
    target shape rate 9500000

    Service-policy : TRAFFIC_SHAPE_OUT

    Any idea why shaping no longer seems to work?