After (hopefully) agreeing on what routing, bridging, and switching are, let’s focus on the first important topic in this area: how do we get a packet across the network? Yet again, there are three fundamentally different technologies:
- Source node knows the full path (source routing)
- Source node opens a path (virtual circuit) to the destination node and uses that path to send traffic
- The network performs hop-by-hop destination-address-based packet forwarding.
More details in the Getting Packets Across the Network video.
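For what it’s worth, here’s a minimal Python sketch contrasting the first and the third option; the topology, node names, and forwarding tables are made up purely for illustration:

```python
# Toy illustration (not tied to any real device) of two forwarding paradigms:
# source routing carries the whole path in the packet, while hop-by-hop
# forwarding looks up the destination address at every node.

# Hypothetical per-node forwarding tables: destination -> next hop
forwarding_tables = {
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def hop_by_hop(src, dst):
    """Each node independently picks the next hop based on the destination."""
    path, node = [src], src
    while node != dst:
        node = forwarding_tables[node][dst]   # per-hop destination lookup
        path.append(node)
    return path

def source_routed(packet_path):
    """The source computed the full path; intermediate nodes just follow it."""
    return list(packet_path)                  # e.g. ["A", "B", "C", "D"]

print(hop_by_hop("A", "D"))                   # ['A', 'B', 'C', 'D']
print(source_routed(["A", "B", "C", "D"]))    # ['A', 'B', 'C', 'D']
```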
One of my readers is designing a layer-2-only data center fabric (no SVI interfaces on switches) with stringent security requirements using Cisco Nexus switches, and he wondered whether a host connected to such a fabric could attack a switch, and whether it would be possible to reach the management network in that way.
Do you think it’s possible to reach the MANAGEMENT PLANE from the DATA PLANE? Is it valid to think that there is a potential attack vector someone could exploit to source traffic from the front of the device (ASIC), across the PCI bus to the CPU, across the PCI bus to the Platform Controller Hub, and through the I/O card to spew out the Management Port onto that out-of-band network?
My initial answer was “of course there’s always a conduit from the switching ASIC to the CPU, how would you handle STP/CDP/LLDP otherwise”. I also asked Lukas Krattiger for more details; here’s what he sent me:
Justin Pietsch is back with another must-read article, this time focused on high-speed Ethernet switching ASICs. I’ve rarely seen so many adjacent topics covered in a single easy-to-read article.
Got this interesting question from one of my readers:
Based on my experience, the documentation regarding Linux networking is either elementary man pages for user-space utilities or very complicated Linux kernel source code. Does getting deep into Linux networking mean reading source code?
It all depends on how deep you plan to go:
If you’re working solely with IP-based networks, you’re probably quick to assume that hop-by-hop destination-only forwarding is the only packet forwarding paradigm that makes sense. Not true; even today’s networks use a variety of forwarding mechanisms, most of them called some variant of routing or switching.
What exactly is the difference between the two, and what is bridging? I’m answering these questions (and a few others like what’s the difference between data-, control- and management planes) in the Bridging, Routing and Switching Terminology video.
A CPU is not designed for packet forwarding. One example is retaining packet order: it’s practically impossible for a multicore CPU to deliver packets in the order they were received after they’ve been processed in parallel by multiple cores. Another example is scheduling: yes, a CPU can do scheduling, but at a very high cost in CPU cycles.
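If you want to convince yourself, here’s a toy Python simulation (purely illustrative, nothing to do with real forwarding code) showing how packets processed in parallel by several workers complete out of order:

```python
# Toy illustration of why parallel packet processing on multiple cores tends
# to reorder packets: packets that take longer to process finish after
# packets that arrived later.
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(seq):
    # Simulated per-packet processing time varies (different headers, features...)
    time.sleep(random.uniform(0.001, 0.01))
    return seq

packets = list(range(10))                        # arrival order: 0..9
with ThreadPoolExecutor(max_workers=4) as pool:  # four "cores"
    futures = [pool.submit(process, p) for p in packets]
    completion_order = [f.result() for f in as_completed(futures)]

print(completion_order)   # almost never [0, 1, ..., 9] -- extra work is
                          # needed to restore the original packet order
```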
We did a number of Software Gone Wild podcasts trying to figure out whether smart NICs address a real need or whether it’s just another vendor attempt to explore all potential markets. As expected, we got opposing views, ranging from Luke Gorrie claiming a NIC should be as simple as possible to Silvano Gai explaining how dedicated hardware performs the same operations at lower cost, lower power consumption, and way higher speeds.
In theory, there’s no doubt that Silvano is right. Just look at how expensive some router line cards are, and try to figure out how much it would cost to replicate in software the 25.6 Tbps of forwarding performance we’ll get in a single ASIC (Tomahawk-4), assuming ~10 Gbps per CPU core. High-speed core packet forwarding has to be done in dedicated hardware.
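Here’s the back-of-the-envelope math; the per-core throughput and per-server core count are rough assumptions, not measured values:

```python
# Back-of-the-envelope estimate: how many CPU cores would it take to match
# a 25.6 Tbps ASIC? All numbers are rough assumptions.
asic_throughput_gbps = 25_600        # Tomahawk-4 class ASIC
per_core_gbps = 10                   # assumed software forwarding per core
cores_per_server = 64                # assumed dual-socket server

cores_needed = asic_throughput_gbps / per_core_gbps
servers_needed = cores_needed / cores_per_server

print(f"{cores_needed:.0f} cores ~ {servers_needed:.0f} servers")
# -> 2560 cores ~ 40 servers, before you even think about NICs and PCIe bandwidth
```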
Greg Ferro is back with some great technical content, this time explaining why hardware-based packet capture might return unexpected results.
Here’s one of the weirdest ideas I’ve found recently: patch together two dangling ends of virtual Ethernet cables with PBR.
To be fair, Jon Langemak used that example to demonstrate how powerful tc could be. It’s always fun to see a totally-unexpected aspect of Linux networking… even though it looks like the creators of those tools believed in the Perl mentality of creating a gazillion variants of line noise to get the job done.
Pete Lumbis started his Cumulus Linux 4.0 update with an overview of differences between Cumulus Linux on hardware switches and Cumulus VX, and continued with an in-depth list of ASIC families supported by Cumulus Linux.
You can watch his presentation, as well as the more in-depth overview of Cumulus Linux concepts by Dinesh Dutt, in the recently-updated What Is Cumulus Linux All About video.
A few weeks ago I described the basics of AWS networking, now it’s time to describe how different Azure is.
As always, it would be best to watch my Azure Networking webinar to get the details. This blog post is the abridged CliffsNotes version of the webinar (and here’s the reason I won’t write a similar blog post for other public clouds ;).
A while ago we discussed a software-focused view of Network Interface Cards (NICs) with Luke Gorrie, and a hardware-focused view of them with Or Gerlitz (Mellanox), Andy Gospodarek (Broadcom) and Jiri Pirko (Mellanox).
Why would anyone want to implement features in hardware and not in software, and what would be the best hardware implementation? We discussed these dilemmas with Silvano Gai in Episode 110 of Software Gone Wild podcast.
There was an obvious invisible elephant in the virtual Cloud Field Day 7 (CFD7v) event I attended in late April 2020. Most everyone was talking about AWS, how their stuff runs on AWS, how it integrates with AWS, or how it will help others leapfrog AWS (yeah, sure…).
Although you REALLY SHOULD watch my AWS Networking webinar (or something equivalent) to understand what problems vendors like VMware or Pensando are facing or solving, I’m pretty sure a lot of people think they can get away with the CliffsNotes version of it, so here they are ;)
In my quest to understand how much buffer space we really need in high-speed switches I encountered an interesting phenomenon: we no longer have a gut feeling for what makes sense, sometimes going as far as assuming that 16 MB (or 32 MB) of buffer space per 10GE/25GE data center ToR switch is another $vendor shenanigan focused on cutting costs. Time for another set of Fermi estimates.
Let’s take a recent data center switch using the Trident II+ chipset and having 16 MB of buffer space (source: the awesome packet buffers page by Jim Warner). Most switches using this chipset have 48 10GE ports and 4-6 uplinks (40GE or 100GE).
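A quick Fermi estimate (ignoring the intricacies of shared-buffer allocation) shows what those 16 MB mean in terms of time:

```python
# Fermi estimate: how much buffering does 16 MB give you on a 48 x 10GE
# ToR switch? Rounded numbers, shared-buffer details ignored.
buffer_bytes = 16e6          # 16 MB packet buffer (Trident II+ class)
ports = 48                   # 10GE server-facing ports
port_speed_bps = 10e9        # 10 Gbps per port

buffer_bits = buffer_bytes * 8
per_port_share_ms = buffer_bits / ports / port_speed_bps * 1000
whole_buffer_one_port_ms = buffer_bits / port_speed_bps * 1000

print(f"fair share per port: ~{per_port_share_ms:.2f} ms of line-rate traffic")
print(f"entire buffer on one congested port: ~{whole_buffer_one_port_ms:.1f} ms")
# -> roughly 0.27 ms per port, or ~12.8 ms if a single port hogs the whole buffer
```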
You haven’t mentioned Intel’s Omni-Path at all. Should I be surprised?
While Omni-Path looks like a cool technology (at least at the whitepaper level), nobody ever mentioned it (or Intel) in any data center switching discussion I was involved in.