In the Does Unequal-Cost Multipathing Make Sense blog post I wrote (paraphrased):
The trick to successful utilization of unequal uplinks is to use them wisely […] It’s how multipath TCP (MP-TCP) could be used for latency-critical applications like Siri.
Minh Ha quickly pointed out (some) limitations of MP-TCP and, as is usually the case, his comment was too valuable to be left as small print at the bottom of a blog post.
Continuing my archeological explorations, I found a dusty bag of old QoS content:
- Queuing Principles
- QoS Policing
- Traffic Shaping
- Impact of Transmit Ring Size (tx-ring-limit)
- FIFO Queuing
- Fair Queuing in Cisco IOS
I kept digging and turned up a few MPLS, BGP and ADSL nuggets worth saving:
A reader of my blog sent me this question:
Do you think we can trust DSCP marking on servers (whether in the DC or elsewhere - Windows or Linux)?
As they say, “not as far as you can throw them.”
Does that mean that the network should do application recognition and marking on the ingress network node? Absolutely not, although switch and router vendors adore the idea of solving all problems on their boxes.
Urban legends claim that Sir Isaac Newton started thinking about gravity when an apple dropped on his head. Regardless of its origins, his theory successfully predicted planetary motions and helped us get people to the moon… there was just this slight problem with Mercury’s precession.
Likewise, his laws of motion worked wonderfully until someone started crashing very small objects together at very high speeds, or decided to see what happens when you give electrons two slits to go through.
Then there was the tiny problem of light traveling at the same speed in all directions… even when measured from objects moving in different directions.
In my quest to understand how much buffer space we really need in high-speed switches I encountered an interesting phenomenon: we no longer have a gut feeling for what makes sense, sometimes going as far as assuming that 16 MB (or 32 MB) of buffer space per 10GE/25GE data center ToR switch is another $vendor shenanigan focused on cutting costs. Time for another set of Fermi estimates.
Let’s take a recent data center switch using the Trident II+ chipset with 16 MB of buffer space (source: the awesome packet buffers page by Jim Warner). Most switches using this chipset have 48 10GE ports and 4-6 uplinks (40GE or 100GE).
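Here’s a back-of-the-envelope sketch in Python (mine, not from the original article) that runs the numbers. The even split of the buffer across the 48 edge ports is a simplifying assumption; real shared-buffer ASICs allocate memory dynamically.

```python
# Fermi estimate: how much buffering does 16 MB give a 48 x 10GE switch?
# Assumption (mine): the shared buffer is split evenly across the edge ports;
# an actual shared-buffer ASIC allocates it dynamically.

buffer_bytes = 16 * 1024 * 1024   # total packet buffer on the Trident II+ switch
ports = 48                        # 10GE edge ports
port_speed_bps = 10e9             # 10 Gbps per edge port

per_port_bytes = buffer_bytes / ports
per_port_ms = per_port_bytes * 8 / port_speed_bps * 1000

print(f"{per_port_bytes / 1024:.0f} KB per port, "
      f"roughly {per_port_ms:.2f} ms of line-rate buffering")
# -> 341 KB per port, roughly 0.28 ms of line-rate buffering
```

Even under that naive assumption every 10GE port gets a few hundred microseconds’ worth of buffering, which should help restore the gut feeling the estimate is after.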
Approximately two years ago I tried to figure out whether aggressive marketing of deep buffer data center switches makes sense, recorded a few podcasts on the topic and organized a webinar with JR Rivers.
Not surprisingly, the question keeps popping up, so it seems it’s time for another series of TL&DR articles. Let’s start with the basics:
The last presentation during the Tech Field Day Extra @ Cisco Live Europe event covered the Cisco-Apple partnership, and we expected an hour of corporate marketese.
Can’t tell you how pleasantly surprised we were when Jerome Henry started his very technical presentation explaining the wireless goodies you get when using iOS with IOS.
A while ago I decided it was time to figure out whether it's better to drop or to delay TCP packets, and quickly figured out you get 12 opinions (usually with no real arguments supporting them) if you ask 10 people. Fortunately, I know someone who deals with TCP performance for a living, and Juho Snellman was kind enough to agree to record another podcast.
One of my readers watched my TCP, HTTP and SPDY webinar and disagreed with my assertion that shaping sometimes works better than policing.
TL&DR summary: policing = dropping excess packets, shaping = delaying excess packets.
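To make the one-liner concrete, here’s a minimal Python sketch (mine, not from the webinar) of a token bucket used both ways: the policer drops packets that exceed the bucket, while the shaper tells the caller how long to delay them. The class name, rate, burst size, and packet lengths are made-up illustration values.

```python
import time

class TokenBucket:
    """Toy token bucket used to contrast policing and shaping."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate in bytes per second
        self.burst = burst_bytes            # bucket depth
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def police(self, pkt_len):
        """Policing: conforming packets are forwarded, excess packets are dropped."""
        self._refill()
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return "forward"
        return "drop"

    def shape(self, pkt_len):
        """Shaping: excess packets are delayed until enough tokens accumulate."""
        self._refill()
        delay = max(0.0, (pkt_len - self.tokens) / self.rate)
        self.tokens -= pkt_len              # may go negative = backlog being worked off
        return delay                        # caller holds the packet this long

# Five back-to-back 1500-byte packets against a 1 Mbps / 3000-byte contract:
policer = TokenBucket(1_000_000, 3000)
shaper = TokenBucket(1_000_000, 3000)
for i in range(5):
    print(i, policer.police(1500), f"{shaper.shape(1500) * 1000:.0f} ms delay")
```

In a real shaper the excess packet sits in a queue for that long before being transmitted; the policer simply never sends it and leaves it to TCP to detect the loss.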
When someone tells you that “TCP is a lossy protocol” during a job interview, don’t throw him out immediately – he was just trusting the Internet a bit too much.
Everyone has a bad hair day, and it really doesn’t matter who published that text… but if you’re publishing technical information, at least try to do no harm.
A friend sent me a long list of questions after listening to the excellent Future of Networking podcast with Martin Casado because (as he said) he prefers “having a technical discussion with arguments and not just throwing statements out there.”
He started with “Martin's view seems to be that network is all plumbing and all the intelligence should be in the applications.”
When I asked “Are there any truly QoS-aware routing protocols out there?” in one of my SD-WAN posts, Marcelo Spohn from ADARA Networks quickly pointed out that they have one – Dynamic Link-State Routing Protocol.
He also claimed that DLSP has no scalability concerns – more than enough reasons to schedule an online chat, resulting in Episode 40 of Software Gone Wild. We didn’t go too deep this time, but you should get a nice overview of what DLSP is and how it works.
Ethan Banks recently wrote a nice blog post detailing the benefits and drawbacks of traditional routing protocols and comparing them with their SD-WAN counterparts.
While I agree with everything he wrote, the comparison between the two isn’t exactly fair – it’s a bit like trying to cut the cheese with a chainsaw and complaining about the resulting waste.
Whenever I get asked about QoS in the data center, my stock reply is “bandwidth is cheaper than QoS-induced complexity.” That’s definitely true in most cases, and ideally the elephant problems should be solved higher up in the application stack, not with network-layer kludges. But are there situations where you actually need data center QoS?