This podcast introduction was written by Nick Buraglio, the host of today’s podcast.
As we all know, BGP runs the networked world. It is a protocol that has existed and operated in the vast expanse of the internet in one form or another since the early 1990s, and even though it has been extended, enhanced, twisted, and warped into performing a myriad of tasks that one would never have imagined in the silver era of internetworking, it has remained largely unchanged in its operational core.
The world as we know it would never exist without BGP, and because it is such a widely deployed protocol with such a solid track record of “just working”, the transition to a better security model around it has been extraordinarily slow.
I got this question about the use of AS numbers on data center leaf switches participating in an MLAG cluster:
In the Leaf-and-Spine Fabric Architectures webinar you recommended having the same AS number on all members of an MLAG cluster and running iBGP between them. In the Autonomous Systems and AS Numbers article you discuss the option of having a different AS number per leaf. Which one should I use… and do I still need the EBGP peering between the leaf pair?
As always, there’s a bit of a gap between theory and practice ;), but let’s start with a leaf-and-spine fabric diagram illustrating both concepts:
Got mentioned in this tweet a while ago:
Watching @ApstraInc youtube stream regarding BGP in the DC with @doyleassoc and @jtantsura. Maybe BGP is getting bigger and bigger traction from big enterprise data centers but I still see an IGP being used frequently. I am eager to have @ioshints opinion on that hot subject.
Maybe I’ve missed some breaking news, but assuming I haven’t my opinion on that subject hasn’t changed.
One of the attendees of our Building Next-Generation Data Center online course submitted a picture-perfect solution to the scalable layer-2 fabric design challenge:
- VXLAN/EVPN based data center fabric;
- IGP within the fabric;
- EBGP with the WAN edge routers because they’re run by a totally different team and they want to have a policy enforcement point between the two;
- EVPN over IBGP within the fabric;
- EVPN over EBGP between the fabric and WAN edge routers.
The only seemingly weird decision: he decided to run the EVPN EBGP session between the loopback interfaces of the core switches (used as BGP route reflectors) and the WAN edge routers.
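To put the design into configuration terms, here’s a minimal FRR-style sketch of one core switch acting as route reflector. Everything in it — the AS numbers (65000 for the fabric, 65100 for the WAN edge), router IDs, and loopback addresses — is made up for illustration and not taken from the submitted design:

```
! Hypothetical core switch / route reflector -- all ASNs and addresses invented
router bgp 65000
 bgp router-id 10.0.0.1
 ! IBGP within the fabric, sessions between loopbacks (underlay reachability via the IGP)
 neighbor fabric peer-group
 neighbor fabric remote-as 65000
 neighbor fabric update-source lo
 neighbor 10.0.0.11 peer-group fabric
 neighbor 10.0.0.12 peer-group fabric
 ! EBGP with the WAN edge router, session between loopbacks (hence multihop)
 neighbor 192.0.2.2 remote-as 65100
 neighbor 192.0.2.2 update-source lo
 neighbor 192.0.2.2 ebgp-multihop 3
 !
 address-family l2vpn evpn
  ! EVPN over IBGP within the fabric...
  neighbor fabric activate
  neighbor fabric route-reflector-client
  ! ...and EVPN over EBGP towards the WAN edge
  neighbor 192.0.2.2 activate
 exit-address-family
```

Running the EBGP session between loopbacks rather than directly-connected interfaces is exactly what makes the multihop knob necessary, which is part of why the decision looks unusual at first glance.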
Two weeks ago I started with a seemingly simple question:
If a BGP speaker R is advertising a prefix A with next hop N, how does the network know that N is actually alive and can be used to reach A?
… and answered it for the case of directly-connected BGP neighbors (TL&DR: Hope for the best).
Jeff Tantsura provided an EVPN perspective, starting with “the common non-arguable logic is reachability != functionality”.
Now let’s see what happens when we add route reflectors to the mix. Here’s a simple scenario:
I’d like to bring back the EVPN context. The discussion is more nuanced; the common non-arguable logic here is reachability != functionality.
One of our subscribers sent me this email when trying to use ideas from the Ansible for Networking Engineers webinar to build BGP route reflector configuration:
I’m currently discovering Ansible/Jinja2 and trying to create BGP route reflector configuration from a Jinja2 template using an Ansible playbook. As part of the group_vars YAML file, I wish to list all route reflector clients’ IP addresses. When I have 50+ neighbors, the YAML file gets quite unreadable and it’s hard to see the data model anymore.
Whenever you hit a roadblock like this one, you should start with the bigger picture and maybe redefine the problem.
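For example, here’s a sketch of one possible restructuring (not necessarily the data model we ended up with): describe each route reflector client as a single entry and let the Jinja2 template expand the list into neighbor statements. All file names, variable names (bgp_asn, rr_clients), and addresses below are made up:

```yaml
# group_vars/route_reflectors.yml -- hypothetical data model
bgp_asn: 65000
rr_clients:
  - { name: leaf01, ip: 10.0.0.11 }
  - { name: leaf02, ip: 10.0.0.12 }
  # ... one short line per client keeps the data model readable even with 50+ neighbors
```

```jinja
{# templates/rr_bgp.j2 -- hypothetical template expanding the client list #}
router bgp {{ bgp_asn }}
{% for client in rr_clients %}
 neighbor {{ client.ip }} remote-as {{ bgp_asn }}
 neighbor {{ client.ip }} description {{ client.name }}
 neighbor {{ client.ip }} route-reflector-client
{% endfor %}
```

The repetitive parts live in the template, and the YAML file is reduced to the data that actually varies per neighbor.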
How does the network know that a VTEP is actually alive? (1) from the point of view of the control plane and (2) from the point of view of the data plane? And how do you ensure that control and data plane liveness monitoring has the same view? BFD for BGP is a possible solution for (1) but it’s not meant for 3rd party next hops, i.e. it doesn’t address (2).
Let’s stop right there (or you’ll stop reading in the next 10 milliseconds). I will also try to rephrase the question in more generic terms, hoping Aldrin won’t mind a slight detour… we’ll get back to the original question in another blog post.
As I explained in the How Networks Really Work and Upcoming Internet Challenges webinars, routing security, and BGP security in particular, remains one of the unsolved challenges we’ve been facing for decades (see also: what makes BGP a hot mess).
Fortunately, due to the enormous efforts of a few persistent individuals, BGP RPKI is getting traction (NTT just went all-in), and Flavio Luciani and Tiziano Tofoni decided to do their part, creating an excellent in-depth document describing BGP RPKI theory and configuration on Cisco and Juniper routers.
There are only two things you have to do:
- Read the document;
- Implement RPKI in your network.
Thank you, the Internet will be grateful.
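If you want a taste of what “implement RPKI” means in practice before diving into the document, here’s a minimal origin-validation sketch in FRR syntax (the document itself covers Cisco and Juniper). The validator address, port, AS numbers, and route-map policy are placeholders, not recommendations:

```
! Hypothetical FRR sketch -- assumes bgpd runs with the RPKI module loaded (bgpd -M rpki)
rpki
 ! RTR session to an RPKI validator (for example a local Routinator); address/port are placeholders
 rpki cache 192.0.2.10 3323 preference 1
 exit
!
! Drop routes whose origin AS is RPKI-invalid, accept everything else
route-map RPKI-IN deny 10
 match rpki invalid
route-map RPKI-IN permit 20
!
router bgp 65000
 neighbor 203.0.113.1 remote-as 64510
 address-family ipv4 unicast
  neighbor 203.0.113.1 route-map RPKI-IN in
 exit-address-family
```

Whether you drop invalid routes outright or merely de-preference them is a local policy decision; the document goes into far more detail.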
After covering configuration and performance optimizations introduced in recent FRRouting releases, Donald Sharp focused on some of the recent usability enhancements, including BGP BestPath explanations, BGP Hostname, BGP Failed Neighbors, and improved debugging.
After my response to the BGP is a hot mess topic, Corey Quinn graciously invited me to discuss BGP issues on his podcast. It took us a long while to set it up, but we eventually got there… and the results were published last week. Hope you’ll enjoy our chat.
Aldrin wrote a well-thought-out comment to my EVPN Dilemma blog post explaining why he thinks it makes sense to use Juniper’s IBGP (EVPN) over EBGP (underlay) design. The only problem I have is that I forcefully disagree with many of his assumptions.
He started with an in-depth explanation of why EBGP over directly-connected interfaces makes little sense:
After a brief overview of the FRRouting suite, Donald Sharp continued with a deep dive into the FRR architecture, including the various routing daemons, the role of Zebra and ZAPI, the interface between the RIB (Zebra) and the FIB (Linux kernel), a sample data flow for route installation, and multi-threading in the Zebra and BGP daemons.