Category: networking fundamentals
A lot of people are confused about the roles of network layers (some more than others), the interactions between MAC addresses, IP addresses, and TCP/UDP port numbers, the differences between routing and bridging… and why it’s so bad to bridge across large distances (or in large networks).
I tried to explain most of those topics in the How Networks Really Work webinar (next session coming on April 2nd), but as is usually the case, someone did a much better job: you MUST READ the poetic and hilariously funny World in which IPv6 was a good design by Avery Pennarun.
After decades of riding the Moore’s law curve, networking bandwidth should be (almost) infinite and (almost) free, right? WRONG, as I explained in the Bandwidth Is (Not) Infinite and Free video (part of the How Networks Really Work webinar).
There are still pockets of Internet desert where mobile or residential users have to deal with traffic caps, and if you decide to move your applications into any public cloud, you’d better check how much bandwidth those applications consume, or you’ll be the next victim of the Great Bandwidth Swindle. For more details, watch the video.
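Back-of-the-envelope math goes a long way here. Here’s a minimal sketch estimating monthly cloud egress costs; the per-GB price is a made-up illustrative number, so plug in whatever your provider actually charges:

```python
# Hypothetical cloud egress cost estimate. The $/GB figure is an assumption
# for illustration only, not any specific provider's price list.

EGRESS_PRICE_PER_GB = 0.09   # assumed price in USD per GB of outbound traffic

def monthly_egress_cost(avg_mbps: float) -> float:
    """Cost of sustained outbound traffic at avg_mbps over a 30-day month."""
    seconds_per_month = 30 * 24 * 3600
    gb_per_month = avg_mbps / 8 / 1000 * seconds_per_month  # Mbps -> GB/month
    return gb_per_month * EGRESS_PRICE_PER_GB

# A modest sustained 100 Mbps of outbound traffic:
print(f"${monthly_egress_cost(100):,.0f} per month")   # ~ $2,916
```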
After the “shocking” revelation that a network can never be totally reliable, I addressed another widespread lack of common sense: due to the laws of physics, the client-server latency is never zero (and never even close to what a developer gets from the laptop’s loopback interface).
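To get a feeling for the lower bound physics imposes, here’s a back-of-the-envelope sketch assuming signals propagate through fiber at roughly two-thirds of the speed of light in vacuum (the distance is an illustrative example):

```python
# Back-of-the-envelope propagation delay: the physical lower bound on latency.
# Assumes propagation in fiber at ~2/3 of the speed of light in vacuum.

SPEED_OF_LIGHT_KM_S = 300_000                     # km/s in vacuum (rounded)
FIBER_PROPAGATION_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

def one_way_delay_ms(distance_km: float) -> float:
    """Best-case one-way propagation delay in milliseconds."""
    return distance_km / FIBER_PROPAGATION_KM_S * 1000

# A trans-US fiber path (~4000 km as an example) can never be faster than:
print(f"{one_way_delay_ms(4000):.1f} ms one way")   # ~20 ms
print(f"{one_way_delay_ms(4000) * 2:.1f} ms RTT")   # ~40 ms round trip
```

No amount of clever software can make that number smaller; the loopback interface just never has to pay it.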
After introducing the fallacies of distributed computing in the How Networks Really Work webinar, I focused on the first one: the network is (not) reliable.
While that might be understood by most networking professionals (and ignored by many developers), here’s an interesting shocker: even TCP is not always reliable (see also: Joel Spolsky’s take on Leaky Abstractions).
As always, no good deed goes unpunished: “creative” individuals trying to force-fit their mis-designed star-shaped pegs into round holes, and networking vendors looking for a competitive advantage, quickly destroyed the idea with tons of middlebox devices, ranging from firewalls and load balancers to NAT, WAN optimization, and DPI monstrosities.
Russ White wrote a great blog post explaining why you have to understand the problem you’re solving instead of blindly believing the $vendor slide deck… or, as I said a long time ago, think about how you’ll troubleshoot your network, because you won’t be able to reformat it once it crashes.
Now it’s time to put it all together.
A subscriber sent me this intriguing question:
Is it not theoretically possible for Ethernet frames to be 64k long if ASIC vendors simply bothered or decided to design/make chipsets that supported it? How did we end up in the 1.5k neighborhood? In whose best interest did this happen?
Remember that Ethernet started as a shared-cable 10 Mbps technology. Transmitting a 64K frame on that technology would take approximately 50 msec (roughly as long as getting from the East Coast to the West Coast). Also, Ethernet had no tight media access control like Token Ring had, so a single host could transmit multiple frames without anyone else getting airtime, resulting in unacceptable delays.
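The math behind that number is trivial; here’s a quick sketch comparing the serialization delay of a 64K frame and a standard 1500-byte frame:

```python
# Serialization delay: how long it takes to put a frame on the wire.

def serialization_delay_ms(frame_bytes: int, link_bps: float) -> float:
    """Time (ms) to transmit frame_bytes on a link running at link_bps."""
    return frame_bytes * 8 / link_bps * 1000

# A 64 KB frame hogs a shared 10 Mbps cable for over 50 msec...
print(f"{serialization_delay_ms(65536, 10e6):.1f} ms")   # ~52.4 ms
# ... while a 1500-byte frame takes just over a millisecond:
print(f"{serialization_delay_ms(1500, 10e6):.2f} ms")    # ~1.20 ms
```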
Grouping the features needed in a networking stack into a bunch of layered modules is a great idea, but unfortunately it turns out that important features like error recovery, retransmission, and flow control could be placed in several different layers, from the data link layer (dealing with individual network segments) to the transport layer (dealing with reliable end-to-end transmission).
So where should we put those modules? As always, the correct answer is it depends, in this particular case on transmission reliability, latency, and the cost of bandwidth. You’ll find more details in the Retransmissions and Flow Control part of the How Networks Really Work webinar.
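To make the trade-off more tangible, here’s a minimal stop-and-wait retransmission sketch; the lossy link is simulated, and all the constants are illustrative assumptions:

```python
import random

# Minimal stop-and-wait ARQ sketch: send one frame, wait for the ACK,
# retransmit on timeout. The unreliable link below is simulated.

LOSS_PROBABILITY = 0.3   # assumed loss rate for the simulation
MAX_RETRIES = 5

def send_frame(seq: int, payload: bytes) -> bool:
    """Simulated lossy link: True if the frame and its ACK both survived."""
    return random.random() > LOSS_PROBABILITY

def reliable_send(seq: int, payload: bytes) -> None:
    for attempt in range(1, MAX_RETRIES + 1):
        if send_frame(seq, payload):
            print(f"frame {seq} acknowledged (attempt {attempt})")
            return
        print(f"frame {seq} timed out, retransmitting")
    raise TimeoutError(f"frame {seq} lost after {MAX_RETRIES} attempts")

for seq, chunk in enumerate([b"hello", b"world"]):
    reliable_send(seq, chunk)
```

Whether logic like this belongs in the data link layer (think lossy radio links) or only in the transport layer (think reliable fiber) is precisely the it depends trade-off mentioned above.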
Two weeks ago I replied to a battle-scarred reaction to the 7-layer OSI model; this time I’ll address a much more nuanced view from Russ White. Please read his article first (as always, it’s well worth reading), and when you come back we’ll focus on this claim:
The OSI Model does not accurately describe networks.
Like any other tool in your toolbox, the 7-layer OSI model can be viewed in a number of ways; among other things, it can be used:
After identifying some of the challenges every network solution must address (part 1, part 2, part 3), we tried to tackle an interesting question: “how do you implement this whole spaghetti mess in a somewhat-reliable and structured way?”
The Roman Empire had an answer more than 2000 years ago: divide-and-conquer (aka “eating the elephant one bite at a time”). These days we call it layering and abstractions.
In the Need for Network Layers video I listed all the challenges we have to address, and then described how you could group them into meaningful modules (called networking layers).
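One way to visualize those modules in action is encapsulation on the sending side: each layer wraps the data it gets from the layer above with its own header. A deliberately oversimplified sketch (the headers are strings purely for readability; real ones are binary structures):

```python
# Oversimplified encapsulation sketch: each layer prepends its own header
# to whatever the layer above handed down.

def transport_layer(data: str) -> str:
    return "TCP[src=49152,dst=80]" + data               # ports: which application

def network_layer(segment: str) -> str:
    return "IP[src=10.0.0.1,dst=192.0.2.7]" + segment   # addresses: which host

def data_link_layer(packet: str) -> str:
    return "ETH[src=aa:..,dst=bb:..]" + packet          # MACs: which node on this segment

frame = data_link_layer(network_layer(transport_layer("GET / HTTP/1.1")))
print(frame)
```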
In the introductory videos of the How Networks Really Work webinar I described the mandatory elements of any networking solution and the additional challenges you have to solve when you can’t pull a cable between adjacent nodes.
It’s time for the next bit of complexity: what if we have more than two nodes connected to the same network segment? Welcome to the world of multi-access networks and data link control.
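As a taste of what data link control involves on a shared segment, here’s a minimal sketch of binary exponential backoff, the collision-recovery algorithm used by classic CSMA/CD Ethernet (the constants follow the 10 Mbps Ethernet parameters):

```python
import random

# Binary exponential backoff as used by classic CSMA/CD Ethernet:
# after the n-th collision, wait a random number of slot times chosen
# from [0, 2^min(n,10) - 1]. Give up after 16 attempts.

SLOT_TIME_US = 51.2    # 512 bit times at 10 Mbps
MAX_ATTEMPTS = 16

def backoff_delay_us(collision_count: int) -> float:
    """Random wait (microseconds) after the given number of collisions."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions, frame dropped")
    slots = random.randint(0, 2 ** min(collision_count, 10) - 1)
    return slots * SLOT_TIME_US

for n in (1, 3, 10):
    print(f"after collision {n}: wait {backoff_delay_us(n):.1f} us")
```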
After discussing the challenges one encounters even in the simplest networking scenario (connecting two computers with a cable), we took a short diversion into an interesting complication: what if the two computers are far apart and we can’t pull a cable between them?
Trying to answer that question, we entered the wondrous world of transmission technologies. It’s a topic one could spend a whole life exploring and mastering, so we could do no more than cover the fundamentals of modulation and multiplexing technologies.
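If you want a single formula that captures why clever modulation can’t buy you unlimited capacity, it’s the Shannon-Hartley theorem; a quick sketch:

```python
import math

# Shannon-Hartley theorem: the theoretical upper bound on error-free bit rate
# over a channel with bandwidth B (Hz) and signal-to-noise ratio SNR.

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity in bits per second for a given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# The classic example: a plain telephone line (~3 kHz bandwidth, ~30 dB SNR)
print(f"{shannon_capacity_bps(3000, 30):,.0f} bps")   # ~30 kbps
```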
Whenever you’re discussing a complex topic it’s worth adhering to two principles: (A) identify the challenges you’re trying to solve and (B) start as simple as you can and add complexity later.
We did exactly that in the Introducing Networking Challenges part of the How Networks Really Work webinar. We started with the simplest possible case of two computers connected with a cable… and even there identified a plethora of challenges that had to be solved more than half a century ago (and still have to be solved today, no matter what magic software-defined technology someone pulls out of their wizard hat).