Category: QoS
Congestion Control Algorithms Are Not Fair
Creating a mathematical model of queuing in a distributed system is hard (Queuing Theory was one of the most challenging ipSpace.net webinars so far), so instead of solutions based on control theory and mathematical models we often get approaches that merely look promising.
Things that look intuitively promising aren’t always what we expect them to be, at least according to an MIT group that analyzed delay-bounding TCP congestion control algorithms (CCAs) and found that most of them result in unfair bandwidth distribution across parallel flows in scenarios that diverge from the proverbial spherical cow in a vacuum. Even worse, they claim that:
[…] Our paper provides a detailed model and rigorous proof that shows how all delay-bounding, delay-convergent CCAs must suffer from such problems.
It seems QoS will remain spaghetti-throwing black magic for a bit longer…
Repost: Buffers, Congestion, Jitter, and Shapers
Béla Várkonyi left a great comment on a blog post discussing (among other things) whether we need large buffers on spine switches. I don’t know how many people read the comments, but this one is too valuable to be lost somewhere below the fold.
You might want to add another consideration: if you have a lot of traffic aggregation, you could still have congestion even when the ingress and egress ports run at roughly the same speed, or when the egress port has more capacity. Then you have two strategies: buffer (and suffer jitter and delay), or drop (and hope that the upper layers will detect it and reduce the sending rate by shaping).
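To put some (purely hypothetical) numbers on the buffer-or-drop tradeoff, here’s a back-of-the-envelope sketch of two 10GE ports bursting simultaneously toward a single 10GE egress port. The burst size is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope illustration with assumed numbers: two 10GE ingress
# ports each send a synchronized 1 MB burst toward a single 10GE egress port.
# The aggregate arrival rate (20 Gbps) exceeds the drain rate (10 Gbps), so a
# queue builds up even though every port runs at the same nominal speed.

ingress_rate_bps = 2 * 10e9          # two ports bursting at line rate
egress_rate_bps = 10e9               # single egress port
burst_bytes = 2 * 1_000_000          # total burst arriving during the overlap

burst_duration = burst_bytes * 8 / ingress_rate_bps       # seconds
drained_bytes = egress_rate_bps * burst_duration / 8
backlog_bytes = burst_bytes - drained_bytes                # peak queue depth
added_delay = backlog_bytes * 8 / egress_rate_bps          # seconds

print(f"peak backlog: {backlog_bytes / 1e6:.1f} MB")                  # ~1.0 MB
print(f"extra delay for the last byte: {added_delay * 1e3:.2f} ms")   # ~0.8 ms
# Without roughly 1 MB of buffer the switch has to drop, hoping that TCP
# (or a shaper closer to the source) slows the senders down.
```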
Worth Reading: The State of fq_codel (and Bufferbloat)
Erik Auerswald sent me a pointer to a blog post by Dave Taht: The state of fq_codel and sch_cake worldwide. It’s so nice to see what a huge impact Dave has made since he started the Bufferbloat project.
Hint: if you have no idea what Bufferbloat or fq_codel are, you REALLY SHOULD explore Dave’s web site.
Worth Reading: End-to-end Congestion Control Cannot Avoid Latency Spikes
Found a pointer to another “you cannot beat the laws of physics (or networking)” result: you cannot avoid latency spikes with end-to-end congestion control, regardless of the amount of unicorn dust or hype you throw at the problem (original paper).
Mythbusting: NFV Data Center Fabric Buffering Requirements
Every now and then I stumble upon an article or a comment explaining how Network Function Virtualization (NFV) introduces new data center fabric buffering requirements. Here’s a recent example:
For Telco/carrier Cloud environments, where NFVs (which are much slower than hardware SGW) get used a lot, latency is higher with a lot of jitter due to the nature of software and the varying link speeds, so DC-level near-zero buffer is not applicable.
It seems to me we’re dealing with another myth. Starting with the basics:
Packet Bursts in Data Center Fabrics
When I wrote about the (non)impact of switching latency, I was (also) thinking about packet bursts jamming core data center fabric links when I mentioned the elephants in the room… but when I started writing about them, I realized they might be yet another red herring (together with the supposed need for large buffers in data center switches).
Here’s how it looks from my ignorant perspective when considering a simple leaf-and-spine network like the one in the following diagram. Please feel free to set me straight; I honestly can’t figure out where I went astray.
Repost: Using MP-TCP to Utilize Unequal Links
In the Does Unequal-Cost Multipathing Make Sense blog post I wrote (paraphrased):
The trick to successful utilization of unequal uplinks is to use them wisely […] It’s how multipath TCP (MP-TCP) could be used for latency-critical applications like Siri.
Minh Ha quickly pointed out (some) limitations of MP-TCP, and as is usually the case, his comment was too valuable to be left as small print at the bottom of a blog post.
Reviving Old Content, Part 2
Continuing my archeological explorations, I found a dusty bag of old QoS content:
- Queuing Principles
- QoS Policing
- Traffic Shaping
- Impact of Transmit Ring Size (tx-ring-limit)
- FIFO Queuing
- Fair Queuing in Cisco IOS
I kept digging and turned up a few MPLS, BGP, and ADSL nuggets worth saving:
Can We Trust Server DSCP Marking?
A reader of my blog sent me this question:
Do you think we can trust DSCP marking on servers (whether in the DC or elsewhere, Windows or Linux)?
As they say, “not as far as you can throw them.”
Does that mean the network should do application recognition and marking on the ingress network node? Absolutely not, although switch and router vendors adore the idea of solving all problems on their boxes.
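To illustrate why server-set DSCP values deserve exactly that level of trust: on Linux (and most other operating systems) any unprivileged application can set whatever DSCP value it likes on its own sockets with a single setsockopt call. A minimal Python sketch (assuming IPv4 on Linux; the EF value is just an example):

```python
import socket

# Any unprivileged application can mark its own traffic; here it claims EF
# (DSCP 46). The TOS byte carries DSCP in its upper six bits: 46 << 2 == 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every packet sent through this socket is now marked EF, regardless of
# whether the traffic deserves expedited forwarding.
```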
Fruit Drops and Packet Drops
Urban legends claim that Sir Isaac Newton started thinking about gravity when an apple dropped on his head. Regardless of its origins, his theory successfully predicted planetary motions and helped us get people to the moon… there was just this slight problem with Mercury’s precession.
Likewise, his laws of motion worked wonderfully until someone started crashing very small objects together at very high speeds, or decided to see what happens when you give electrons two slits to go through.
Then there was the tiny problem of light traveling at the same speed in all directions… even when measured from objects moving in different directions.
Switch Buffer Sizes and Fermi Estimates
In my quest to understand how much buffer space we really need in high-speed switches, I encountered an interesting phenomenon: we no longer have a gut feeling for what makes sense, sometimes going as far as assuming that 16 MB (or 32 MB) of buffer space per 10GE/25GE data center ToR switch is another $vendor shenanigan focused on cutting costs. Time for another set of Fermi estimates.
Let’s take a recent data center switch using the Trident II+ chipset with 16 MB of buffer space (source: the awesome packet buffers page by Jim Warner). Most switches using this chipset have 48 10GE ports and 4 to 6 uplinks (40GE or 100GE).
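Here’s a quick back-of-the-envelope version of that Fermi estimate, assuming (unrealistically, but good enough for a gut-feel number) that the buffer is split evenly across the server-facing ports:

```python
# Fermi estimate: how much buffering does 16 MB buy on a 48 x 10GE switch?
buffer_bytes = 16 * 1024 * 1024      # total packet buffer on the ASIC
ports = 48                           # 10GE server-facing ports
port_speed_bps = 10e9

per_port_bytes = buffer_bytes / ports
per_port_drain_time = per_port_bytes * 8 / port_speed_bps

print(f"buffer per port:    {per_port_bytes / 1024:.0f} KB")           # ~341 KB
print(f"drain time at 10GE: {per_port_drain_time * 1e6:.0f} usec")     # ~280 usec
# Roughly a third of a megabyte -- or under 300 microseconds -- per port;
# hardly an extravagant amount once a few servers burst toward the same port.
```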
Do Packet Drops Matter for TCP Performance?
Approximately two years ago I tried to figure out whether the aggressive marketing of deep-buffer data center switches makes sense, recorded a few podcasts on the topic, and organized a webinar with JR Rivers.
Not surprisingly, the question keeps popping up, so it seems it’s time for another series of TL&DR articles. Let’s start with the basics:
Cisco and Apple Agree: QoS Marking Is an Application Problem
The last presentation during the Tech Field Day Extra @ Cisco Live Europe event was a Cisco-Apple Partnership presentation, and we expected an hour of corporate marketese.
I can’t tell you how pleasantly surprised we were when Jerome Henry started his very technical presentation explaining the wireless goodies you get when using iOS with IOS.
To Drop or To Delay, That’s the Question on Software Gone Wild
A while ago I decided it was time to figure out whether it’s better to drop or to delay TCP packets, and quickly figured out that you get 12 opinions (usually with no real arguments supporting them) if you ask 10 people. Fortunately, I know someone who deals with TCP performance for a living, and Juho Snellman was kind enough to agree to record another podcast.
Policing or Shaping? It Depends
One of my readers watched my TCP, HTTP and SPDY webinar and disagreed with my assertion that shaping sometimes works better than policing.
TL&DR summary: policing = dropping excess packets, shaping = delaying excess packets.
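To make that TL&DR a bit more concrete, here’s a toy token-bucket sketch (purely illustrative, not how any particular box implements it): the policer drops whatever exceeds the bucket, while the shaper waits until the packet fits and then sends it.

```python
import time

class TokenBucket:
    """Toy token bucket: 'rate' bytes per second, at most 'depth' bytes of burst."""

    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def police(self, pkt_bytes):
        """Policer: send the packet if tokens are available, otherwise drop it."""
        self._refill()
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "transmit"
        return "drop"

    def shape(self, pkt_bytes):
        """Shaper: delay the packet until enough tokens accumulate, then send it."""
        self._refill()
        if self.tokens < pkt_bytes:
            time.sleep((pkt_bytes - self.tokens) / self.rate)
            self._refill()
        self.tokens -= pkt_bytes
        return "transmit (delayed)"
```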
Here’s the picture he sent me (watch the video to get the context and read this article to get the background details):