Category: QoS

Worth Exploring: LibreQoS

Erik Auerswald pointed me to an interesting open-source project. LibreQoS implements decent QoS using software switching on many-core x86 platforms. It works as a bump-in-the-wire software solution, so you should be able to plug it into your network just before a major congestion point and let it handle packet dropping and prioritization.

Obviously, the concept is nothing new. I wrote about a similar problem in xDSL networks in 2009.
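I have no insight into LibreQoS internals, so here's just a minimal Python sketch of the general bump-in-the-wire idea: a software box sitting in the forwarding path, deciding for every packet whether it gets forwarded or dropped. The token-bucket shaper and the 100 Mbps / 50 KB numbers are my assumptions, not anything LibreQoS actually does.

```python
# Toy illustration of a bump-in-the-wire shaping decision.
# This is NOT LibreQoS code, just a sketch of the general concept;
# the rate and burst numbers are made up.

import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes       # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, pkt_len: int) -> bool:
        """Return True if the packet may be forwarded, False if it should be dropped."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False

# A bump-in-the-wire device sitting in front of a congestion point would run
# something like: forward(packet) if shaper.admit(len(packet)) else drop(packet)
shaper = TokenBucket(rate_bps=100e6, burst_bytes=50_000)   # assumed 100 Mbps bottleneck
```

A real implementation obviously does far more (per-subscriber queues, AQM, prioritization), but the per-packet forward-or-drop decision at the congestion point is the crux of the concept.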


Stop the Network-Based Application Recognition Nonsense

One of my readers sent me an interesting update on the post-QUIC round of NBAR whack-a-mole (TL&DR: everything is better with Bluetooth AI):

Cloudflare (and the other hyperscalers) are all-in on QUIC, as it gives them lots of E2E control, taking a lot of choice away from the service providers on how they handle traffic and congestion. It is quite well outlined by Geoff Huston in an APNIC podcast.

So far, so good. However, whenever there’s a change, there’s an opportunity for marketing FUD, coming from the usual direction.


Congestion Control Algorithms Are Not Fair

Creating a mathematical model of queuing in a distributed system is hard (Queuing Theory was one of the most challenging ipSpace.net webinars so far), so instead of solutions based on control theory and mathematical models, we often get approaches that merely look promising.

Things that look intuitively promising aren’t always what we expect them to be, at least according to an MIT group that analyzed delay-bounding TCP congestion control algorithms (CCAs) and found that most of them result in unfair distribution of bandwidth across parallel flows in scenarios that diverge from the proverbial spherical cow in a vacuum. Even worse, they claim that:

[…] Our paper provides a detailed model and rigorous proof that shows how all delay-bounding, delay-convergent CCAs must suffer from such problems.
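To make the unfairness claim a bit more tangible, here's a toy discrete-time model (entirely my construction, not the paper's model) of a single loss-based AIMD flow sharing a bottleneck with a single delay-bounding flow; all the numbers are invented.

```python
# Toy model: loss-based AIMD flow versus delay-bounding flow at one bottleneck.
# All parameters are made up; only the qualitative outcome matters.

CAPACITY = 100          # bottleneck capacity in packets per RTT
BUFFER = 100            # buffer size in packets
DELAY_THRESHOLD = 0.1   # delay-bounding flow backs off above this queuing delay (in RTTs)

w_loss, w_delay = 10.0, 10.0    # congestion windows in packets

for rtt in range(2000):
    queue = max(0.0, w_loss + w_delay - CAPACITY)
    qdelay = queue / CAPACITY                     # queuing delay in units of RTT

    # Loss-based flow: additive increase, halve the window on buffer overflow
    if queue > BUFFER:
        w_loss = max(1.0, w_loss / 2)
    else:
        w_loss += 1

    # Delay-bounding flow: back off as soon as queuing delay exceeds its target
    if qdelay > DELAY_THRESHOLD:
        w_delay = max(1.0, w_delay - 1)
    else:
        w_delay += 1

total = w_loss + w_delay
print(f"approximate bandwidth share: loss-based {w_loss/total:.0%}, delay-bounding {w_delay/total:.0%}")
```

Run it and the delay-bounding flow ends up with a single-digit percentage of the bandwidth: the loss-based competitor keeps the queue full, so the delay signal the other flow reacts to never goes away.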

It seems QoS will remain spaghetti-throwing black magic for a bit longer…


Repost: Buffers, Congestion, Jitter, and Shapers

Béla Várkonyi left a great comment on a blog post discussing (among other things) whether we need large buffers on spine switches. I don’t know how many people read the comments, but this one is too valuable to be lost somewhere below the fold.


You might want to add another consideration. If you have a lot of traffic aggregation, you could still have congestion even when the ingress and egress ports run at roughly the same speed, or when the egress port has more capacity. Then you have two strategies: buffer and suffer jitter and delay, or drop and hope that the upper layers will detect it and reduce their sending rate by shaping.
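
Here's a quick back-of-the-envelope illustration of that aggregation scenario (my numbers, not Béla's): two ingress ports bursting at the same time toward a single egress port of the same speed.

```python
# Back-of-the-envelope: traffic aggregation toward a single egress port.
# Two 10GE ingress ports burst simultaneously toward one 10GE egress port;
# the excess must be buffered or dropped. All numbers are assumptions.

ingress_rate = 2 * 10e9        # two ports bursting at 10 Gbps each (bits/s)
egress_rate = 10e9             # one 10 Gbps egress port (bits/s)
burst_duration = 200e-6        # assume the simultaneous burst lasts 200 microseconds

excess_bits = (ingress_rate - egress_rate) * burst_duration
print(f"buffer needed to absorb the burst: {excess_bits / 8 / 1024:.0f} KiB")

# Buffering the burst also adds queuing delay (and thus jitter) for everything behind it:
print(f"added delay if buffered: {excess_bits / egress_rate * 1e6:.0f} microseconds")
```

Absorbing the burst costs roughly a quarter of a megabyte of buffer and adds 200 microseconds of delay to everything sitting behind it; dropping it instead shifts the problem to the transport layer, which is exactly the trade-off described above.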


Mythbusting: NFV Data Center Fabric Buffering Requirements

Every now and then I stumble upon an article or a comment explaining how Network Function Virtualization (NFV) introduces new data center fabric buffering requirements. Here’s a recent example:

For Telco/carrier Cloud environments, where NFVs (which are much slower than hardware SGW) get used a lot, latency is higher with a lot of jitter due to the nature of software and the varying link speeds, so DC-level near-zero buffer is not applicable.

It seems to me we’re dealing with another myth. Starting with the basics:
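Here's one hedged back-of-the-envelope data point (my numbers and my choice of formula, not necessarily how the full post argues it): the classic Sizing Router Buffers rule of thumb puts the required buffer at roughly RTT × bandwidth / √N for N concurrent long-lived TCP flows, and intra-data-center round-trip times are tiny.

```python
# Hedged back-of-the-envelope using the classic "Sizing Router Buffers"
# rule of thumb: buffer ~ RTT * bandwidth / sqrt(N) for N long-lived flows.
# The RTT, port speed, and flow count below are assumptions.

from math import sqrt

rtt = 100e-6        # assumed intra-DC round-trip time: 100 microseconds
bandwidth = 100e9   # 100 Gbps fabric port
flows = 1000        # assumed number of concurrent long-lived flows

buffer_bits = rtt * bandwidth / sqrt(flows)
print(f"rule-of-thumb buffer: {buffer_bits / 8 / 1024:.0f} KiB per congested port")
```

Tens of kilobytes per congested 100GE port is a far cry from deep-buffer territory, and whether the traffic was generated by a hardware appliance or a virtual network function doesn't enter that arithmetic at all.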


Packet Bursts in Data Center Fabrics

When I wrote about the (non)impact of switching latency, I was (also) thinking about packet bursts jamming core data center fabric links when I mentioned the elephants in the room… but when I started writing about them, I realized they might be yet another red herring (together with the supposed need for large buffers in data center switches).

Here’s how it looks from my ignorant perspective when considering a simple leaf-and-spine network like the one in the following diagram. Please feel free to set me straight; I honestly can’t figure out where I went astray.
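
For a rough sense of scale (my numbers, not the ones from the full post), here's how long a typical burst actually occupies the faster fabric links:

```python
# Rough numbers for how long a burst occupies a leaf-to-spine link.
# Packet size, burst length, and port speeds are assumptions.

pkt_bytes = 1500
burst_pkts = 100        # a 100-packet burst from one server
server_port = 10e9      # server attached at 10 Gbps
spine_link = 100e9      # leaf-to-spine uplink at 100 Gbps

burst_bits = burst_pkts * pkt_bytes * 8
print(f"time to send the burst at 10 Gbps: {burst_bits / server_port * 1e6:.0f} microseconds")
print(f"time the burst occupies a 100 Gbps spine link: {burst_bits / spine_link * 1e6:.0f} microseconds")
```

A burst that takes 120 microseconds to leave a 10GE server keeps a 100GE spine link busy for only 12 microseconds, which at least hints at why a single burst jamming the core might indeed be a red herring.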


Repost: Using MP-TCP to Utilize Unequal Links

In the Does Unequal-Cost Multipathing Make Sense blog post I wrote (paraphrased):

The trick to successful utilization of unequal uplinks is to use them wisely […] It’s how multipath TCP (MP-TCP) could be used for latency-critical applications like Siri.
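
As a purely conceptual illustration of "using them wisely" (my sketch, nothing to do with how MP-TCP schedulers are actually implemented), imagine a scheduler that keeps small, latency-critical exchanges on the lowest-latency subflow and spreads bulk transfers across everything available:

```python
# Conceptual subflow scheduler sketch; subflow names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    rtt_ms: float
    bandwidth_mbps: float

subflows = [
    Subflow("fast-but-thin", rtt_ms=10, bandwidth_mbps=50),    # assumed cellular-like path
    Subflow("fat-but-slow", rtt_ms=80, bandwidth_mbps=500),    # assumed DSL/satellite-like path
]

def pick_subflows(message_bytes: int) -> list[Subflow]:
    """Small, latency-critical messages use the lowest-RTT path; bulk uses everything."""
    if message_bytes < 10_000:
        return [min(subflows, key=lambda s: s.rtt_ms)]
    return subflows

print([s.name for s in pick_subflows(500)])        # -> ['fast-but-thin']
print([s.name for s in pick_subflows(5_000_000)])  # -> ['fast-but-thin', 'fat-but-slow']
```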

Minh Ha quickly pointed out (some) limitations of MP-TCP, and as is usually the case, his comment was too valuable to be left as small print at the bottom of a blog post.

Intuitively I don’t necessarily agree with all of his conclusions, but I don’t know enough to have a qualified opinion.

Can We Trust Server DSCP Marking?

A reader of my blog sent me this question:

Do you think we can trust DSCP marking on servers (whether in a DC or elsewhere, Windows or Linux)?

As they say, “not as far as you can throw them.”
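
Here's a minimal Python illustration of why that's so: on a typical Linux server, any unprivileged application can stamp its packets with whatever DSCP value it fancies, Expedited Forwarding included (the destination address is just a documentation-range placeholder).

```python
# Any unprivileged Linux application can set its own DSCP marking.
# EF (DSCP 46) lands in the upper six bits of the ToS byte: 46 << 2 = 0xB8.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)   # mark everything as Expedited Forwarding
s.sendto(b"definitely-not-voice-traffic", ("192.0.2.1", 5000))
```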

Does that mean that the network should do application recognition and marking on the ingress network node? Absolutely not, although the switch and router vendors adore the idea of solving all problems on their boxes.


Fruit Drops and Packet Drops

Urban legends claim that Sir Isaac Newton started thinking about gravity when an apple dropped on his head. Regardless of its origins, his theory successfully predicted planetary motions and helped us get people to the moon… there was just this slight problem with Mercury’s precession.

Likewise, his laws of motion worked wonderfully until someone started crashing very small objects together at very high speeds, or decided to see what happens when you give electrons two slits to go through.

Then there was the tiny problem of light traveling at the same speed in all directions… even when measured from objects moving in different directions.


Switch Buffer Sizes and Fermi Estimates

In my quest to understand how much buffer space we really need in high-speed switches, I encountered an interesting phenomenon: we no longer have a gut feeling for what makes sense, sometimes going as far as assuming that 16 MB (or 32 MB) of buffer space per 10GE/25GE data center ToR switch is another $vendor shenanigan aimed at cutting costs. Time for another set of Fermi estimates.

Let’s take a recent data center switch using the Trident II+ chipset with 16 MB of buffer space (source: the awesome packet buffers page by Jim Warner). Most switches using this chipset have 48 10GE ports and 4-6 uplinks (40GE or 100GE).
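
Here's a first-pass estimate in code, under the (admittedly simplistic) assumption that the shared buffer gets split evenly across all ports:

```python
# Fermi estimate: per-port share of a 16 MB shared buffer and how long
# it takes to drain at 10 Gbps. Even split across ports is an assumption.

buffer_bytes = 16 * 1024 * 1024     # 16 MB of packet buffer on the switch
ports = 48 + 6                      # 48 x 10GE ports plus 6 uplinks
port_speed = 10e9                   # drain rate of a 10GE port (bits/s)

per_port = buffer_bytes / ports
print(f"buffer per port: {per_port / 1024:.0f} KiB")
print(f"time to drain that buffer at 10 Gbps: {per_port * 8 / port_speed * 1e6:.0f} microseconds")
```

Roughly 300 KiB per port, or about 250 microseconds of queuing at 10GE; that hardly sounds like an extravagant amount of buffering.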
