Category: IP routing
After a brief overview of the FRRouting suite, Donald Sharp continued with a deep dive into FRR architecture, including the various routing daemons, the role of Zebra and ZAPI, the interface between the RIB (Zebra) and the FIB (Linux kernel), a sample data flow for route installation, and multi-threading in the Zebra and BGP daemons.
In the mid-2000s I wrote a number of articles describing various TCP/IP features. Most of them are a bit outdated, so I decided to clean up, update, and repost the most interesting ones on ipSpace.net, starting with the Never-Ending Story of IP Fragmentation.
In mid-June I started another pet project: a series of webinars focused on networking fundamentals. In the first live session on June 18th we identified the challenges one has to solve when building an end-to-end networking solution and discussed the role of a layered approach to networking.
Not surprisingly, we quickly went down the rabbit holes of computer networking history, including SCSI cables, serial connections and modems… but that’s where it all started, and some of the concepts developed at that time are still used today… oftentimes heavily morphed by recursive application of RFC 1925 Rule 11.
A few years ago we “celebrated” 512K day: the size of the full Internet routing table exceeded 512K prefixes (for whatever value of K ;), overflowing TCAMs in some IP routers and resulting in interesting brownouts.
We’re close to exceeding the 768K mark, and the beware-768K-day blog posts have already started appearing. While you (RFC 2119) SHOULD check the size of your forwarding table against the maximum capabilities of your hardware, the more important question is “Why do I need 768K forwarding entries if I’m not a Tier-1 provider?”
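To put that question into perspective, here’s a back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption (not a measurement from any real network): the point is simply that a default route toward your upstreams plus customer and peering prefixes is a tiny fraction of the full table.

```python
# Back-of-the-envelope comparison (all numbers below are hypothetical):
# how much FIB space do you need once you accept a default route toward
# your upstreams instead of carrying the full Internet table?

FULL_TABLE = 768_000        # approximate full IPv4 table size
TCAM_LIMIT = 512_000        # hypothetical FIB capacity of an older box

own_prefixes      = 50      # prefixes you originate yourself
customer_prefixes = 2_000   # prefixes learned from downstream customers
peering_prefixes  = 30_000  # prefixes learned on local peering (IXP)
default_routes    = 2       # one default route per upstream

needed = own_prefixes + customer_prefixes + peering_prefixes + default_routes

print(f"Full table fits into TCAM: {FULL_TABLE <= TCAM_LIMIT}")
print(f"Forwarding entries you actually need: {needed:,}")
```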
Beautiful is better than ugly.
Simple is better than complex.
Complex is better than complicated.
So just because you can, don't.
Layer-2 fabrics can’t be extended beyond two spine switches. I had a long argument with a $vendor guy on this. They don’t even count SPB as a layer-2 fabric, and so forth.
The root cause of this myth is a lack of understanding of what layer-2, layer-3, bridging and routing mean. You might want to revisit a few of my very old blog posts before moving on: part 1, part 2, what is switching, layer-3 switches and routers.
As anyone starting their journey into AWS quickly discovers, cloud is different (or, as I wrote in the description of my AWS workshop, you feel like Alice in Wonderland). One of the gotchas: when you link multiple routing domains (Virtual Private Clouds – the other VPC), you have to create static routing table entries on both ends. Even worse, peering is not transitive, so there’s no transit VPC – you have to build a full mesh of peering relationships.
The correct solution to this challenge is automation:
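Here’s a minimal sketch of what that automation might look like with boto3, assuming all VPCs live in the same account and region. The VPC IDs, CIDRs, and route table IDs are made-up placeholders, and real code would obviously need error handling, waiting for the peering to become active, and idempotency checks.

```python
import itertools
import boto3

ec2 = boto3.client("ec2")

# Hypothetical inventory: VPC ID, its CIDR, and its main route table ID.
vpcs = [
    ("vpc-aaaa", "10.1.0.0/16", "rtb-aaaa"),
    ("vpc-bbbb", "10.2.0.0/16", "rtb-bbbb"),
    ("vpc-cccc", "10.3.0.0/16", "rtb-cccc"),
]

# Peering is not transitive, so build a full mesh of peering connections
# and install a static route on *both* ends of every peering link.
for (vpc_a, cidr_a, rtb_a), (vpc_b, cidr_b, rtb_b) in itertools.combinations(vpcs, 2):
    pcx = ec2.create_vpc_peering_connection(VpcId=vpc_a, PeerVpcId=vpc_b)
    pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Route from A toward B's CIDR, and from B toward A's CIDR.
    ec2.create_route(RouteTableId=rtb_a, DestinationCidrBlock=cidr_b,
                     VpcPeeringConnectionId=pcx_id)
    ec2.create_route(RouteTableId=rtb_b, DestinationCidrBlock=cidr_a,
                     VpcPeeringConnectionId=pcx_id)
```

The number of peering links grows quadratically – n × (n − 1) / 2 for n VPCs – which is exactly why you don’t want to maintain those static routes by hand.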
Here’s a question I got from an attendee of my Building Next-Generation Data Center online course:
As far as I understood […] it is obsolete nowadays to build a new DC fabric with routing on the host using BGP; the proper way to go is to use an IGP + SDN overlay. Is my understanding correct?
Ignoring for the moment the fact that nothing is ever obsolete in IT, the right answer is it depends… this time on the answers to two seemingly simple questions: “what services are we offering?” and “what connectivity problem are we trying to solve?”
Most blog posts generate the usual noise from the anonymous peanut gallery (if only they had at least a sliver of Statler and Waldorf in them), but every now and then there’s a comment that’s pure gold. The one made by Tony Przygienda (of RIFT fame) on the Valley-Free Routing post is so good and relevant that I decided to republish it as a separate blog post. Enjoy!
Reading academic articles about Internet-wide routing challenges, you might stumble upon valley-free routing – a pretty important concept with applications in WAN and data center routing design.
If you’re interested in the academic discussions, you’ll find a pretty exhaustive list of papers on this topic in the Informative References section of RFC 7908; here’s the over-simplified version.
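In a nutshell: an AS path is valley-free if it first climbs zero or more customer-to-provider links, crosses at most one peer-to-peer link, and then only descends provider-to-customer links. Here’s that rule as a tiny Python sketch (the link labels are my own shorthand, not a standard notation):

```python
def is_valley_free(links):
    """links: sequence of 'c2p', 'p2p', or 'p2c' labels along the AS path,
    seen from the perspective of the sending AS on each hop."""
    UP, DOWN = 0, 1
    phase = UP
    for link in links:
        if link == "c2p":
            if phase != UP:          # climbing again after the peak = valley
                return False
        elif link == "p2p":
            if phase != UP:          # at most one peer link, at the peak
                return False
            phase = DOWN
        elif link == "p2c":
            phase = DOWN
        else:
            raise ValueError(f"unknown relationship {link!r}")
    return True

print(is_valley_free(["c2p", "p2p", "p2c"]))   # True  - valid path
print(is_valley_free(["p2c", "c2p"]))          # False - a valley
```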
Using EBGP instead of an IGP (OSPF or IS-IS) in leaf-and-spine data center fabrics is becoming a best practice (read: the thing to do when you have no clue what you’re doing).
The usual argument defending this design choice is “BGP scales better than OSPF or IS-IS”. That’s usually true (see also: Internet), and so far, EBGP is the only reasonable choice in very large leaf-and-spine fabrics… but does it really scale better than a link-state IGP in smaller fabrics?
Our good friend Mr. Anonymous has too many buzzwords and opinions in his repertoire, at least judging by this comment he left on my Using 4-byte AS Numbers with EVPN blog post:
But IGPs don't scale well (as you might have heard) except for RIFT and Openfabric. The others are trying to do ECMP based on BGP.
Should you be worried about OSPF or IS-IS scalability when building your data center fabric? Short answer: most probably not. Before diving into a lengthy explanation, let’s give our dear friend some homework.
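For example, here’s a rough sizing exercise for a two-tier leaf-and-spine fabric, assuming a single area (or level) with point-to-point fabric links; the fabric sizes are hypothetical:

```python
# Homework: how big is the link-state database in a two-tier leaf-and-spine
# fabric? Assumes a single area and point-to-point fabric links (so there's
# roughly one router LSA/LSP per node and no network LSAs).

def fabric_lsdb(spines, leaves):
    nodes = spines + leaves
    links = spines * leaves      # every leaf connects to every spine
    lsas = nodes                 # one router LSA/LSP per node
    adjacencies = 2 * links      # each link appears in both endpoints' LSAs
    return nodes, links, lsas, adjacencies

for spines, leaves in [(2, 16), (4, 32), (8, 96)]:
    nodes, links, lsas, adj = fabric_lsdb(spines, leaves)
    print(f"{spines} spines / {leaves} leaves: "
          f"{nodes} nodes, {links} links, {lsas} LSAs, {adj} adjacency entries")
```

Even the largest of those hypothetical fabrics has on the order of a hundred routers and a few hundred links in its link-state database – hardly the kind of scale that should keep a link-state protocol up at night.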
I was listening to a very interesting Future of Networking episode with Fred Baker a long while ago and enjoyed Fred’s perspectives and historical insights, until Greg Ferro unfortunately couldn’t resist the usual bashing of traditional routing protocols and praising of intent-based (or flow-based, or SDN, or…) whatever.
Here’s what I understood he said around 35:17:
One of my friends sent me this question:
Do you remember if VLANs came first or was it VRFs?
I remember VLANs using ISL (pre-802.1Q encapsulation) on early Cisco Ethernet switches (mid-1990s); the earliest reference I could track down on Wikipedia is from 1988.
As always, we started with the “what’s wrong with what we have right now, like using BGP as a better IGP” question, resulting in “BGP is becoming the trash can of the Internet”.