Category: QoS
Worth Exploring: LibreQoS
Erik Auerswald pointed me to an interesting open-source project. LibreQoS implements decent QoS using software switching on many-core x86 platforms. It works as a bump-in-the-wire software solution, so you should be able to plug it into your network just before a major congestion point and let it handle the packet dropping and prioritization.
Obviously, the concept is nothing new. I wrote about a similar problem in xDSL networks in 2009.
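If you want to get a feel for the bump-in-the-wire idea without deploying LibreQoS itself, here’s a minimal sketch using a transparent Linux bridge with CAKE shaping the egress toward the bottleneck. This is not how LibreQoS is implemented internally; the interface names and the shaped bandwidth are placeholder assumptions.

```python
#!/usr/bin/env python3
"""Conceptual bump-in-the-wire shaper: a transparent Linux bridge between two
NICs, with CAKE shaping traffic toward the congested link.

NOT how LibreQoS is implemented internally -- just an illustration of the
bump-in-the-wire idea. Interface names and bandwidth are placeholders."""
import subprocess

LAN_IF = "eth0"          # faces the inside of the network (assumption)
WAN_IF = "eth1"          # faces the congestion point, e.g. an uplink (assumption)
SHAPED_RATE = "950mbit"  # slightly below the bottleneck speed (assumption)

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd.split(), check=True)

# Build a transparent bridge so the box forwards frames without routing
run("ip link add name br0 type bridge")
for ifname in (LAN_IF, WAN_IF):
    run(f"ip link set dev {ifname} master br0")
    run(f"ip link set dev {ifname} up")
run("ip link set dev br0 up")

# Shape traffic leaving toward the bottleneck with CAKE
run(f"tc qdisc replace dev {WAN_IF} root cake bandwidth {SHAPED_RATE} diffserv4")
```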
Stop the Network-Based Application Recognition Nonsense
One of my readers sent me an interesting update on the post-QUIC round of NBAR whack-a-mole (TL&DR: everything is better with Bluetooth AI):
So far, so good. However, whenever there’s a change, there’s an opportunity for marketing FUD, coming from the usual direction.
Worth Reading: Things We Know about Network Queues
Every time someone tries to persuade you to buy (expensive) big-buffer data center switches, take an antidote: the Things we (finally) know about network queues article by Avery Pennarun.
Worth Reading: Unbloating the Buffers
In case you’ve heard about bufferbloat but don’t know what it is: Dan Groshev wrote a nice bufferbloat for dummies blog post on the APNIC blog.
Congestion Control Algorithms Are Not Fair
Creating a mathematical model of queuing in a distributed system is hard (Queuing Theory was one of the most challenging ipSpace.net webinars so far), so instead of solutions based on control theory and mathematical models we often get intuitively promising heuristics.
Unfortunately, things that look intuitively promising aren’t always what we expect them to be, at least according to an MIT group that analyzed delay-bounding TCP congestion control algorithms (CCAs) and found that most of them result in unfair distribution of bandwidth across parallel flows in scenarios that diverge from the proverbial spherical cow in a vacuum. Even worse, they claim that:
[…] Our paper provides a detailed model and rigorous proof that shows how all delay-bounding, delay-convergent CCAs must suffer from such problems.
It seems QoS will remain spaghetti-throwing black magic for a bit longer…
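To get a feel for how delay-sensitive senders can lose out, here’s a toy discrete-time sketch (my own simplification, not the model from the MIT paper) in which a flow that backs off on rising queueing delay shares a bottleneck with a flow that backs off only on packet loss. All the numbers are arbitrary assumptions.

```python
"""Toy sketch of congestion-control unfairness: a delay-sensitive sender
sharing a bottleneck with a loss-based sender. Not the MIT paper's model;
all numbers are arbitrary assumptions."""
CAPACITY = 100         # packets the bottleneck drains per tick
BUFFER = 200           # bottleneck buffer, in packets
DELAY_THRESHOLD = 20   # queue depth at which the delay-based flow backs off

queue = 0
cwnd_delay, cwnd_loss = 50.0, 50.0   # packets each flow sends per tick

for tick in range(1000):
    arrivals = cwnd_delay + cwnd_loss
    queue = queue + arrivals - CAPACITY
    dropped = max(0, queue - BUFFER)
    queue = max(0, min(queue, BUFFER))

    # Delay-based flow: increase while the queue is short, back off when it grows
    if queue > DELAY_THRESHOLD:
        cwnd_delay = max(1, cwnd_delay * 0.9)
    else:
        cwnd_delay += 1

    # Loss-based flow: keep increasing until the buffer overflows, then halve
    if dropped > 0:
        cwnd_loss = max(1, cwnd_loss / 2)
    else:
        cwnd_loss += 1

print(f"delay-based flow cwnd: {cwnd_delay:.1f}")
print(f"loss-based flow cwnd:  {cwnd_loss:.1f}")
```

Run it and the delay-based flow ends up with a tiny fraction of the bandwidth: the loss-based sender keeps the queue above the delay threshold most of the time, so the delay-based sender keeps backing off.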
Repost: Buffers, Congestion, Jitter, and Shapers
Béla Várkonyi left a great comment on a blog post discussing (among other things) whether we need large buffers on spine switches. I don’t know how many people read the comments; this one is too valuable to be lost somewhere below the fold.
You might want to add another consideration. If you have a lot of traffic aggregation even when the ingress and egress port are roughly at the same speed or when the egress port has more capacity, you could still have congestion. Then you have two strategies, buffer and suffer jitter and delay, or drop and hope that the upper layers will detect it and reduce the sending by shaping.
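A quick back-of-the-envelope calculation of the tradeoff described in the comment (the numbers are arbitrary assumptions, not measurements):

```python
"""Back-of-the-envelope illustration of the buffer-or-drop tradeoff.
All numbers are made-up assumptions."""
link_speed_bps = 10e9        # 10GE egress port
buffer_bytes = 16 * 2**20    # 16 MB of buffer available to this port

# If several ingress ports burst into this egress port at the same time,
# the excess traffic either sits in the buffer (adding delay and jitter) ...
max_queueing_delay = buffer_bytes * 8 / link_speed_bps
print(f"worst-case added delay with a full 16 MB buffer: "
      f"{max_queueing_delay * 1000:.1f} ms")

# ... or gets dropped once the buffer is full, leaving it to TCP (or a shaper
# closer to the source) to slow down the senders.
```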
Worth Reading: The State of fq_codel (and Bufferbloat)
Erik Auerswald sent me a pointer to a blog post by Dave Taht: The state of fq_codel and sch_cake worldwide. It’s so nice to see what a huge impact Dave has made since he started the Bufferbloat project.
Hint: if you have no idea what Bufferbloat or fq_codel are, you REALLY SHOULD explore Dave’s web site.
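If you want to check whether a Linux box you own already uses fq_codel (and switch it over if it doesn’t), something along these lines should do. The interface name is a placeholder, and the commands assume a reasonably recent iproute2; run as root.

```python
"""Check-and-enable sketch for fq_codel on a Linux box. The interface name
is a placeholder -- adjust to your environment."""
import subprocess

IFACE = "eth0"   # placeholder interface name

# What qdisc is the interface using right now?
subprocess.run(["tc", "qdisc", "show", "dev", IFACE], check=True)

# Make fq_codel the default for new interfaces ...
subprocess.run(["sysctl", "-w", "net.core.default_qdisc=fq_codel"], check=True)

# ... and replace the root qdisc on this interface immediately.
subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root", "fq_codel"],
               check=True)
```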
Worth Reading: End-to-end Congestion Control Cannot Avoid Latency Spikes
Found a pointer to another “you cannot beat the laws of physics (or networking)” result: you cannot avoid latency spikes with end-to-end congestion control, regardless of the amount of unicorn dust or hype you’re throwing at the problem (original paper).
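A quick back-of-the-envelope estimate of why that result shouldn’t be surprising: the sender learns about the congestion it’s causing only after roughly one round-trip time, and until then the excess traffic has to be buffered or dropped. The numbers below are arbitrary assumptions.

```python
"""Why end-to-end congestion control cannot react instantly: the sender only
learns about congestion after roughly one round-trip time. Numbers below are
arbitrary assumptions."""
new_flow_rate_bps = 10e9   # a new sender blasting away at 10 Gbps
rtt_seconds = 0.050        # 50 ms control-loop (feedback) delay

# Assume the bottleneck link is already fully used by other flows, so
# everything the new sender pushes out in the first RTT is excess traffic
# that must be queued somewhere or dropped.
excess_bytes = new_flow_rate_bps * rtt_seconds / 8
print(f"traffic in flight before the sender can react: "
      f"{excess_bytes / 2**20:.0f} MB")
```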
Mythbusting: NFV Data Center Fabric Buffering Requirements
Every now and then I stumble upon an article or a comment explaining how Network Function Virtualization (NFV) introduces new data center fabric buffering requirements. Here’s a recent example:
For Telco/carrier Cloud environments, where NFVs (which are much slower than hardware SGW) get used a lot, latency is higher with a lot of jitter due to the nature of software and the varying link speeds, so DC-level near-zero buffer is not applicable.
It seems to me we’re dealing with another myth. Starting with the basics:
Packet Bursts in Data Center Fabrics
When I wrote about the (non)impact of switching latency, I was (also) thinking about packet bursts jamming core data center fabric links when I mentioned the elephants in the room… but when I started writing about them, I realized they might be yet another red herring (together with the supposed need for large buffers in data center switches).
Here’s how it looks from my ignorant perspective when considering a simple leaf-and-spine network like the one in the following diagram. Please feel free to set me straight; I honestly can’t figure out where I went astray.
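For example, here’s a quick Fermi-style sketch (the numbers are assumptions, not measurements): a burst arriving from a 10GE server-facing port drains onto a 100GE uplink roughly ten times faster than it arrives, so a single burst can’t jam a core link on its own.

```python
"""Quick estimate of what a packet burst looks like on leaf-and-spine links.
Numbers are assumptions, not measurements."""
burst_bytes = 1 * 2**20      # a server sends a 1 MB burst (e.g. one TCP window)
server_link_bps = 10e9       # server attached at 10GE
fabric_link_bps = 100e9      # leaf-to-spine uplink at 100GE

# The burst arrives at the leaf switch no faster than the server link allows ...
arrival_time = burst_bytes * 8 / server_link_bps
# ... and drains onto the fabric uplink ten times faster.
drain_time = burst_bytes * 8 / fabric_link_bps

print(f"burst takes {arrival_time * 1e6:.0f} us to arrive, "
      f"{drain_time * 1e6:.0f} us to drain")
# Unless several such bursts hit the same uplink at the same moment,
# the faster core link has no trouble keeping up.
```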
Repost: Using MP-TCP to Utilize Unequal Links
In the Does Unequal-Cost Multipathing Make Sense blog post I wrote (paraphrased):
The trick to successful utilization of unequal uplinks is to use them wisely […] It’s how multipath TCP (MP-TCP) could be used for latency-critical applications like Siri.
Minh Ha quickly pointed out (some) limitations of MP-TCP, and as is usually the case, his comment was too valuable to be left as small print at the bottom of a blog post.
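For the record, recent Linux kernels (5.6 and later) support MP-TCP natively, and an application can opt in with a single socket-creation parameter. Here’s a minimal sketch; the destination is a placeholder, whether additional subflows over multiple uplinks actually get created depends on the MPTCP path-manager configuration, and the numeric protocol constant is the Linux value used as a fallback when the Python version doesn’t expose it by name.

```python
"""Minimal sketch of opening a Multipath TCP connection on a recent Linux
kernel (5.6+). The destination is a placeholder."""
import socket

# Newer Python versions expose IPPROTO_MPTCP; 262 is the Linux value
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
sock.connect(("example.com", 80))          # placeholder destination
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(200).decode(errors="replace"))
sock.close()
```

If the other end doesn’t support MP-TCP, the connection simply falls back to regular TCP.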
Reviving Old Content, Part 2
Continuing my archeological explorations, I found a dusty bag of old QoS content:
- Queuing Principles
- QoS Policing
- Traffic Shaping
- Impact of Transmit Ring Size (tx-ring-limit)
- FIFO Queuing
- Fair Queuing in Cisco IOS
I kept digging and turned up a few MPLS, BGP, and ADSL nuggets worth saving:
Can We Trust Server DSCP Marking?
A reader of my blog sent me this question:
Do you think we can trust DSCP marking on servers (whether in a DC or elsewhere, Windows or Linux)?
As they say, “not as far as you can throw them.”
Does that mean that the network should do application recognition and marking on the ingress network node? Absolutely not, although switch and router vendors adore the idea of solving all problems on their boxes.
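The reason you can’t trust server-set DSCP is that (on Linux at least) any application can set it with a single unprivileged socket option. A minimal sketch:

```python
"""Why server-side DSCP marking is a matter of trust: any application can set
it with one socket option, no special privileges needed (on Linux)."""
import socket

DSCP_EF = 46               # Expedited Forwarding
tos_byte = DSCP_EF << 2    # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
# Every packet this socket sends now carries DSCP EF -- whether it's a voice
# stream or a bulk backup is entirely up to the application.
```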
Fruit Drops and Packet Drops
Urban legends claim that Sir Isaac Newton started thinking about gravity when an apple dropped on his head. Regardless of its origins, his theory successfully predicted planetary motions and helped us get people to the moon… there was just this slight problem with Mercury’s precession.
Likewise, his laws of motion worked wonderfully until someone started crashing very small objects together at very high speeds, or decided to see what happens when you give electrons two slits to go through.
Then there was the tiny problem of light traveling at the same speed in all directions… even when measured from objects moving in different directions.
Switch Buffer Sizes and Fermi Estimates
In my quest to understand how much buffer space we really need in high-speed switches, I encountered an interesting phenomenon: we no longer have a gut feeling for what makes sense, sometimes going as far as assuming that 16 MB (or 32 MB) of buffer space per 10GE/25GE data center ToR switch is another $vendor shenanigan focused on cutting costs. Time for another set of Fermi estimates.
Let’s take a recent data center switch using the Trident II+ chipset and having 16 MB of buffer space (source: the awesome packet buffers page by Jim Warner). Most switches using this chipset have 48 10GE ports and 4-6 uplinks (40GE or 100GE).
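Here’s that Fermi estimate spelled out (treating the shared buffer as if it were split evenly across the server-facing ports, which is a simplification since Trident-class chips use a shared buffer pool):

```python
"""The Fermi estimate from the paragraph above, spelled out. The even
per-port split is a simplification, but it's good enough for a sanity check."""
buffer_bytes = 16 * 2**20    # 16 MB of packet buffer on the switch
ports = 48                   # 10GE server-facing ports
port_speed_bps = 10e9

per_port_bytes = buffer_bytes / ports
drain_time = per_port_bytes * 8 / port_speed_bps

print(f"buffer per port: {per_port_bytes / 1024:.0f} KB")
print(f"worst-case drain time per port: {drain_time * 1e6:.0f} microseconds")
```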