HQF: truly hierarchical queuing

After the initial tests of the HQF framework, I wanted to check how “hierarchical” it really is. As before, I created a policy-map allocating various bandwidth percentages to individual TCP/UDP ports. One of the classes had a child service policy that allocated 70% of the class bandwidth to TCP and 30% to UDP traffic (going to the same port number), with fair queuing used in the TCP subclass.

Short summary: HQF worked brilliantly.

Here’s the relevant configuration: the per-interface policy …

policy-map WAN
 class P5001
   bandwidth percent 20
   fair-queue
 class P5003
   bandwidth percent 30
   service-policy Intra
 class class-default
   fair-queue

… and the child policy:

policy-map Intra
 class TCP
   bandwidth percent 70
   fair-queue
 class class-default
   bandwidth percent 30

You have to create the child policy first; IOS will not allow you to attach a non-existent policy-map as a service policy within a class.
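
For reference, this is the order in which the two policy-maps shown above would be entered in a single configuration session (the class-maps are assumed to exist already; only the relevant part of the WAN policy is repeated):

policy-map Intra
 class TCP
  bandwidth percent 70
  fair-queue
 class class-default
  bandwidth percent 30
!
policy-map WAN
 class P5003
  bandwidth percent 30
  service-policy Intra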

I’ve applied the policy-map to a 2 Mbps point-to-point link and started various traffic sources (similar to the previous tests).
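
Attaching the policy to the WAN link then takes a single service-policy output command; a minimal sketch, assuming the interface bandwidth is set to match the 2 Mbps link speed:

interface Serial0/1/0
 bandwidth 2000
 service-policy output WAN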

Please read the previous HQF-related posts to find a detailed lab description, complete router configurations and traffic source descriptions.

As before, the show policy-map interface command can be used to inspect the QoS state of an interface. The printout clearly documents the hierarchical queuing policies and shows the per-class traffic statistics, offered rates and drop rates:

a1#show policy-map interface ser 0/1/0
 Serial0/1/0

  Service-policy output: WAN

    Class-map: P5001 (match-all)
      303 packets, 367480 bytes
      30 second offered rate 409000 bps, drop rate 15000 bps
      Match: access-group name P5001
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 111/0/0/0
      (pkts output/bytes output) 303/367480
      bandwidth 20% (400 kbps)
      Fair-queue: per-flow queue limit 16

    Class-map: P5003 (match-all)
      5407 packets, 1706420 bytes
      30 second offered rate 2059000 bps, drop rate 810000 bps
      Match: access-group name P5003
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 174/4465/0
      (pkts output/bytes output) 945/546300
      bandwidth 30% (600 kbps)

      Service-policy : Intra

        Class-map: TCP (match-all)
          315 packets, 382500 bytes
          30 second offered rate 418000 bps, drop rate 11000 bps
          Match: access-group name TCP
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 107/0/0/0
          (pkts output/bytes output) 315/382500
          bandwidth 70% (420 kbps)
          Fair-queue: per-flow queue limit 16

        Class-map: class-default (match-any)
          5092 packets, 1323920 bytes
          30 second offered rate 1620000 bps, drop rate 1500000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 64/4466/0
          (pkts output/bytes output) 630/163800
          bandwidth 30% (180 kbps)

    Class-map: class-default (match-any)
      5097 packets, 1325220 bytes
      30 second offered rate 1635000 bps, drop rate 733000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 18/1605/0/1605
      (pkts output/bytes output) 3496/908540
      Fair-queue: per-flow queue limit 16

The iperf results matched the expectations very well: the P5003 class gets 30% of 2 Mbps (600 kbps) and the TCP traffic gets 70% of that (420 kbps). After the TCP/IP header overhead is deducted from the 420 kbps of link bandwidth, the goodput measured by iperf should be slightly above 400 kbps, and it is (the printout below shows 401 kbps):

$ iperf -c 10.0.20.10 -t 3600 -p 5003 -i 60 -P 10
------------------------------------------------------------
Client connecting to 10.0.20.10, TCP port 5003
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
…
[SUM] 480.0-540.0 sec  2.87 MBytes   401 Kbits/sec
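
The UDP flood toward the same port number is not shown here (the previous posts describe the exact traffic sources); a hypothetical iperf invocation along these lines would generate it (the target rate and duration are placeholders):

$ iperf -c 10.0.20.10 -u -p 5003 -b 2M -t 3600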

4 comments:

  1. 2059000 - 810000 = 1249000
    It is twice more than 600kbps

    Class-map: P5003 (match-all)
    5407 packets, 1706420 bytes
    30 second offered rate 2059000 bps, drop rate 810000 bps

  2. What is the maximum rate at which this can be applied? On the 7200 I tested, it maxes out at 100 Mb/s. Is there another IOS or HW combination that can scale higher?

  3. You can probably get more throughput on dedicated hardware (like the hugely expensive blades for the 7600 series) or more powerful boxes (ASR comes to mind). I doubt you could squeeze more out of a 7200 with a different IOS release (assuming you want to run HQF). It doesn't have the latest or the fastest CPU ever invented 8-)

  4. I noticed that too. I think it's cosmetic, because if you look farther down at the child (Intra) policy and use the drop rates from both of its classes, the math works out as expected. If it's not just a cosmetic issue with the drop rate shown on the parent policy, then there is another layer of things I'm not understanding. :)


