HQF: intra-class WFQ ignores IP precedence
Continuing my tests of the Hierarchical Queuing Framework, I checked whether its fair queuing works like the previous IOS implementations, where high-precedence sessions got proportionally more bandwidth.
Summary: Fair queuing within HQF ignores IP precedence.
Setup
I ran an iperf session in parallel with a UDP flood, using the same physical setup as in the previous HQF tests. I configured an inbound service policy on the LAN interface to ensure the packets belonging to the TCP sessions had their IP precedence set to 5 (the default value is zero).
policy-map LAN
 class TCP
  set ip precedence 5
!
interface FastEthernet0/0
 ip address 10.0.0.5 255.255.255.0
 no ip redirects
 load-interval 30
 service-policy input LAN
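The TCP class referenced in the policy map isn't shown in the configuration above; a minimal sketch of what it could look like (the match criterion is my assumption, not necessarily what was used in the test) would be:

class-map match-all TCP
 match protocol tcp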
Baseline tests
I configured weighted fair queuing on a 2 Mbps WAN interface:
interface Serial0/1/0
 bandwidth 2000
 ip address 10.0.1.1 255.255.255.252
 encapsulation ppp
 fair-queue
 ip ospf 1 area 0
 load-interval 30
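The fair-queue command is used without arguments, so the defaults should apply; as far as I recall, the fully spelled-out equivalent would be the following (the congestive discard threshold, number of dynamic queues, and reservable queues are assumptions based on the default values, which match the threshold and conversation counts in the show queue printout below):

fair-queue 64 256 0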
As expected, the iperf session running in parallel with the UDP flood got more than half of the available bandwidth:
$ iperf -c 10.0.20.10 -t 180 -p 5002
------------------------------------------------------------
Client connecting to 10.0.20.10, TCP port 5002
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 10.0.0.10 port 1064 connected with 10.0.20.10 port 5002
[ ID] Interval       Transfer     Bandwidth
[1916]  0.0-180.1 sec  35.2 MBytes  1.64 Mbits/sec
The show queue command (which is still available in IOS release 15.0 unless you configure HQF on the interface) displayed two sessions within the WFQ system with expected TOS values and weights (the weight of the TCP session was 6 times lower than the weight of the UDP session):
rtr#show queue serial 0/1/0
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 135567
  Queueing strategy: weighted fair
  Output queue: 64/1000/64/12798 (size/max total/threshold/drops)
     Conversations  2/3/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 1500 kilobits/sec

  (depth/weight/total drops/no-buffer drops/interleaves) 57/32384/12799/0/0
  Conversation 10, linktype: ip, length: 260
  source: 10.0.0.10, destination: 10.0.20.3, id: 0xB1C1, ttl: 127, TOS: 0
  prot: 17, source port 1059, destination port 5002

  (depth/weight/total drops/no-buffer drops/interleaves) 7/5397/0/0/0
  Conversation 11, linktype: ip, length: 1304
  source: 10.0.0.10, destination: 10.0.20.10, id: 0xB382, ttl: 127, TOS: 160
  prot: 6, source port 1064, destination port 5002
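The two weights line up with the commonly documented WFQ weight formula (assuming this IOS release still uses the 32384 constant):

weight = 32384 / (IP precedence + 1)

precedence 0: 32384 / 1 = 32384
precedence 5: 32384 / 6 ≈ 5397

Since a conversation's share of the link is inversely proportional to its weight, the TCP session should get roughly 6/7 of the 2 Mbps link (around 1.7 Mbps), which is in the same ballpark as the 1.64 Mbits/sec measured by iperf.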
Fair queuing within HQF
To test the fair queuing behavior within HQF, I configured a simple policy map that uses fair-queue in the default class:
policy-map WAN
 class class-default
  fair-queue
I attached the policy map to the interface …
interface Serial0/1/0
 service-policy output WAN
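Side note: once the service policy is attached, the show queue command is no longer available on the interface (as mentioned above), so the per-class queuing details would have to come from the MQC counterpart instead (shown here only as a pointer, not as output captured during this test):

rtr#show policy-map interface Serial0/1/0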
… and repeated the iperf test:
$ iperf -c 10.0.20.10 -t 60 -p 5002
------------------------------------------------------------
Client connecting to 10.0.20.10, TCP port 5002
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 10.0.0.10 port 1062 connected with 10.0.20.10 port 5002
[ ID] Interval       Transfer     Bandwidth
[1916]  0.0-60.0 sec  7.18 MBytes   956 Kbits/sec
This time the TCP session got only half of the available bandwidth, although its packets had higher IP precedence.
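With the IP precedence ignored, both conversations get the same weight, so each flow should receive roughly 1 Mbps of the 2 Mbps link; the measured 956 Kbits/sec is right in line with that expectation.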
Comments

I think this is exactly what Cisco has been stating in their documentation: fair-queue no longer honors QoS marking, i.e. it is no longer WFQ. I wonder if they are now using stochastic fair queueing in place of WFQ's byte counting.
So now we know the documentation matches the implementation ;) Not always true ... O:-)
Since you are doing QoS tests, perhaps it would also be nice to test qos pre-classify.
What I see is that when you use qos pre-classify, the policy map on the physical interface counts the packet sizes before encryption, which I'm not sure is correct; in any case it's a bit confusing, especially when you also use compression. This is probably a bug: it happens with direct IPsec encapsulation, but not with VTI.