Fruit Drops and Packet Drops
Urban legends claim that Sir Isaac Newton started thinking about gravity when an apple dropped on his head. However the idea originated, his theory of gravitation successfully predicted planetary motion and helped us get people to the moon… there was just this slight problem with Mercury’s precession.
Likewise, his laws of motion worked wonderfully until someone started crashing very small objects together at very high speeds, or decided to see what happens when you give electrons two slits to go through.
Then there was the tiny problem of light traveling at the same speed in all directions… even when measured by observers moving in different directions.
You probably know that modern physics resolved these challenges with general relativity, quantum physics, and special relativity… and still hasn’t figured out how to combine the three into a single Theory of Everything. The only problem for someone stuck at the level of high-school physics (like myself) is that these theories tend to be a bit more complex (and thus harder to understand and use) than the simple laws of motion and gravity.
We’re facing a similar challenge in many other disciplines: lacking a grand unifying theory of everything, we have to use approximations, and every approximation has limited applicability.
Take the example of the impact of packet drops and shallow buffers I discussed last summer (and the numerous comments those blog posts generated):
- TCP packet drops are a good thing… unless you’re in a distributed computing environment where microseconds of delay can stall your computation;
- Controlling TCP packet drops with congestion-avoidance algorithms like WRED is an effective way of managing congestion… unless you’re dealing with a small number of high-volume TCP sessions (example: storage traffic; a rough sketch of the RED drop curve WRED builds on follows this list);
- Modern Active Queue Management solutions are supposed to be even better, but are probably hard to implement at high speeds (or I’ve yet again missed something, in which case please write a comment);
- Shallow buffers are good enough… unless you have to provide a lossless environment (see above) or are dealing with a few high-volume transfers over a high-latency path (example: a 10 Gbps transfer with 100 msec RTT; see the bandwidth-delay product math after this list).
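Here’s a minimal Python sketch of the classic RED drop-probability curve that WRED builds on (per-class thresholds aside). The function name and threshold values are mine, picked purely for illustration, and the exponentially weighted averaging real RED uses to compute the queue depth is left out:

```python
def red_drop_probability(avg_queue, min_th=20, max_th=60, max_p=0.1):
    """Packet-drop probability for a given average queue depth (in packets).

    Below min_th nothing is dropped, above max_th everything is dropped,
    and in between the probability rises linearly toward max_p.
    """
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# With thousands of flows, early random drops nudge a few of them at a time.
# With a handful of high-volume sessions, the same drops hit every flow,
# they all halve their windows together, and throughput craters.
for depth in (10, 30, 50, 70):
    print(f"avg queue {depth:>2} packets -> drop probability {red_drop_probability(depth):.2f}")
```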
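And to put numbers on the 10 Gbps / 100 msec RTT example, here’s the back-of-the-envelope bandwidth-delay product calculation, assuming a single long-lived TCP flow and the classic one-BDP buffer rule of thumb:

```python
# Bandwidth-delay product for a single 10 Gbps TCP flow over a 100 msec RTT path.
bandwidth_bps = 10e9   # link speed: 10 Gbps
rtt_seconds = 0.1      # round-trip time: 100 msec

# Bytes that must be in flight to keep the pipe full.
bdp_bytes = bandwidth_bps * rtt_seconds / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB")

# Roughly 125 MB. The classic rule of thumb says a single flow needs on the
# order of one BDP of buffer to ride through a congestion-window halving
# without letting the link go idle -- far more than a shallow-buffer switch
# offers per port.
```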
Does that mean that TCP packet drops are bad, or that we should all buy more expensive deep-buffer switches? Absolutely not, but you have to understand that, lacking better tools (and/or being ignorant of the underlying math), we’re using approximations (usually called best current practices because that sounds so much better), and no approximation is applicable across a very wide range of conditions.
Keep smiling
Andrea