Category: BGP
Worth Reading: Performance Testing of Commercial BGP Stacks
For whatever reason, most IT vendors attach a “you cannot use this for performance testing and/or publish any results” caveat to their licensing agreements, so it’s really hard to get independent test results that are not vendor-sponsored (and thus suitably biased).
Justin Pietsch managed to get permission to publish the test results of Juniper’s containerized routing daemon (cRPD) – no surprise there; Junos outperformed all the open-source implementations Justin had tested in the past.
What about other commercial BGP stacks? Justin did the best he could: he published the Testing Commercial BGP Stacks instructions, so you can run the measurements on your own.
Running BGP between Virtual Machines and Data Center Fabric
Got this question from one of my readers:
When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere, KVM, or OpenStack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?
Short answer: you don’t.
Kubernetes was designed in a way that makes worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart, so from a purely academic perspective there’s no reason to migrate VMs running Kubernetes.
Mixed Feelings about BGP Route Reflector Cluster ID
Here’s another BGP Route Reflector myth:
In a redundant design, you should use Route Reflector Cluster ID to avoid loops.
TL&DR: No.
While BGP route reflectors can cause permanent forwarding loops in sufficiently broken topologies, the Cluster ID was never needed to stop routing update propagation loops.
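For context: RFC 4456 stops update propagation loops with the ORIGINATOR_ID and CLUSTER_LIST attributes – a route reflector drops any route that already carries its own cluster ID (which defaults to its router ID). Here’s a minimal Python sketch of that check; the data structures are made up for illustration and don’t come from any real BGP implementation:

```python
# Minimal sketch of RFC 4456 loop prevention on a route reflector.
# The update dictionary layout is invented for this example.

def reflect_update(update, my_router_id, my_cluster_id):
    """Return the update to reflect, or None if it must be dropped."""
    # A route we originated coming back to us is a loop
    if update.get("originator_id") == my_router_id:
        return None
    # Our own cluster ID in CLUSTER_LIST means the update already
    # passed through this reflector (or its cluster) -- drop it
    cluster_list = update.get("cluster_list", [])
    if my_cluster_id in cluster_list:
        return None
    # Otherwise prepend our cluster ID and reflect the route
    reflected = dict(update)
    reflected["cluster_list"] = [my_cluster_id] + cluster_list
    reflected.setdefault("originator_id", update.get("peer_router_id"))
    return reflected
```

Because the cluster ID defaults to the reflector’s router ID, the propagation loop is stopped whether or not you configure a shared Cluster ID – which is exactly why it was never needed for that purpose.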
BGP Route Reflector Myths
New networking myths are continuously popping up. Here’s a BGP one I encountered a few days ago:
You don’t need IBGP sessions between BGP route reflectors
In general, that’s clearly wrong, as illustrated by this setup:
Three Dimensions of BGP Address Family Nerd Knobs
Got into an interesting BGP discussion a few days ago, resulting in a wild chase through recent SRv6 and BGP drafts and RFCs. You might find the results mildly interesting ;)
BGP has three dimensions of address family configurability:
- Transport sessions. Most vendors implement BGP over TCP over IPv4 and IPv6. I’m sure there’s someone out there running BGP over CLNS, and there are already drafts proposing running BGP over QUIC.
- Address families enabled on individual transport sessions, more precisely a combination of Address Family Identifier (AFI) and Subsequent Address Family Identifier (SAFI).
- Next-hop address family for each enabled address family (see the sketch after this list).
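Here are those three dimensions shown with a few standardized combinations (the combinations are real, the Python data structure is obviously just an illustration):

```python
# Three dimensions of BGP address-family configurability, illustrated
# with a few standardized combinations.
combinations = [
    # (transport session, enabled address family, next-hop address family)
    ("TCP over IPv4", "IPv4 unicast (AFI=1, SAFI=1)", "IPv4"),
    ("TCP over IPv4", "IPv6 unicast (AFI=2, SAFI=1)", "IPv6"),  # RFC 4760
    ("TCP over IPv6", "IPv6 unicast (AFI=2, SAFI=1)", "IPv6"),
    ("TCP over IPv6", "IPv4 unicast (AFI=1, SAFI=1)", "IPv6"),  # RFC 8950
]
for transport, af, nexthop in combinations:
    print(f"{transport:14} carrying {af:30} with {nexthop} next hops")
```

The last entry is the interesting one: RFC 8950 allows a BGP speaker to advertise IPv4 prefixes with IPv6 next hops, decoupling the next-hop address family from the address family of the advertised prefixes.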
Feedback: Recursive BGP Next Hop Resolution
The Recursive BGP Next Hops: an RFC 4271 Quirk blog post generated tons of feedback (thanks a million to everyone writing a comment on my blog or LinkedIn).
Starting with Robert Raszuk, who managed to track down the original email that triggered the (maybe dubious) text in RFC 4271:
The text in section 5.1.3 was not really meant to prohibit load balancing. Keep in mind that it is the FIB layer that constructs the actual forwarding paths.
The text was suggested by Tom Petch in a discussion about BGP advertising valid paths, or even the paths it actually installs in the RIB/FIB. The entire section 5.1.3 is about the rules for advertising paths in BGP.
Recursive BGP Next Hops: an RFC 4271 Quirk
All BGP implementations I’ve seen so far use recursive next hop lookup:
- The next hop in the IP routing table is the BGP next hop advertised in the incoming update;
- That next hop is resolved into the actual next hop using one or more recursive lookups into the IP routing table.
Furthermore, all BGP implementations I’ve seen use multiple recursive next hops (if available) to implement load balancing toward the BGP next hop – that’s how we made EBGP load balancing work in the Stone Age of networking.
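Here’s a minimal Python sketch of that behavior with a toy routing table (the table layout is invented; real implementations obviously use longest-prefix matching and resolution-loop protection):

```python
# Toy recursive next-hop resolution -- illustrative only.
# Each prefix maps to a list of (route type, next hop) entries;
# "connected" next hops are outgoing interfaces needing no resolution.
routing_table = {
    "192.0.2.0/24": [("bgp", "10.0.0.1")],       # BGP route, BGP next hop
    "10.0.0.1/32":  [("igp", "10.1.1.1"),        # two equal-cost IGP paths
                     ("igp", "10.1.2.1")],       # toward the BGP next hop
    "10.1.1.1/32":  [("connected", "Ethernet1")],
    "10.1.2.1/32":  [("connected", "Ethernet2")],
}

def resolve(entries):
    """Recursively resolve next hops until they're directly connected."""
    resolved = []
    for kind, next_hop in entries:
        if kind == "connected":
            resolved.append(next_hop)
        else:   # exact-match lookup stands in for longest-prefix match
            resolved.extend(resolve(routing_table[next_hop + "/32"]))
    return resolved

print(resolve(routing_table["192.0.2.0/24"]))
# ['Ethernet1', 'Ethernet2'] -- both IGP paths toward the BGP next hop
# end up in the forwarding table, giving you load balancing
```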
Highlights: Dynamic Negotiation of BGP Capabilities
The Dynamic Negotiation of BGP Capabilities blog post generated almost no comments, apart from the #facepalm realization that a certain network operating system resets IBGP sessions when the sole EBGP session goes down, but there were a few interesting comments on LinkedIn and Twitter.
While most engineers can easily relate to the awkwardness of bringing down a BGP session to enable new functionality (tearing down a BGP session as a solution reminds me of rebooting a host as a solution), it’s not as easy as it looks. As Adam Chappell put it, “Dynamic capability renegotiation does tend to sound a bit like changing the tyres while still moving. Very neat if you can pull it off but so much to go wrong…”
Podcast: Ironing Out the BGP Ruffles
After the (in)famous October 2021 Facebook outage, Corey Quinn invited me for another Screaming in the Cloud chat, this time focusing on what went wrong (hint: it wasn’t DNS or BGP).
We also touched on VAX/VMS history, how early CCIE lab exams worked, how BGP started, why there are only 13 root name servers (not really), and the transition from networking being pure magic to becoming a commodity. Hope you’ll enjoy our chat as much as I did.
Highlights: Multi-Threaded Routing Daemons
The multi-threaded routing daemons blog post generated numerous in-depth comments here and on LinkedIn. As always, thanks a million for keeping me honest and providing more details or additional perspectives. Here are some of the best bits.
Jeff Tantsura provided the first dose of reality:
All modern routing protocol implementations are multi-threaded, with a minimum separation of adjacency handling, route calculation, and update generation. Note: writing multi-threaded code for complex tasks is a non-trivial exercise (search for thread safety and what happens when it’s not implemented correctly). Moving to multi-threaded code in the early 2010s resulted in a multi-release (multi-year) effort and hundreds of related bugs all around.
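To put thread safety in context, here’s a deliberately simplified Python sketch (nothing like production routing-daemon code) of two threads sharing routing state, with a lock keeping the adjacency handler from mutating the table while the update generator walks it:

```python
# Deliberately simplified illustration of shared-state locking in a
# multi-threaded routing daemon -- not how production code looks.
import threading

rib = {}                      # prefix -> next hop, shared between threads
rib_lock = threading.Lock()   # serializes every access to the RIB

def adjacency_handler():
    """Adjacency thread: installs routes when a session comes up."""
    with rib_lock:
        rib["192.0.2.0/24"] = "10.0.0.1"

def update_generator():
    """Update thread: walks the RIB to build outgoing updates."""
    with rib_lock:
        # Without the lock, the adjacency thread could change the RIB
        # mid-iteration -- a classic race condition
        print([f"UPDATE {p} via {nh}" for p, nh in rib.items()])

threads = [threading.Thread(target=adjacency_handler),
           threading.Thread(target=update_generator)]
for t in threads: t.start()
for t in threads: t.join()
```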
Dr. Tony Przygienda added his hands-on experience (he’s been developing routing protocol software for ages):
Building a BGP Anycast Lab
The Anycast Works Just Fine with MPLS/LDP blog post generated so much interest that I decided to check a few similar things, including running BGP-based anycast over a BGP-free core, and using BGP Labeled Unicast (BGP-LU).
The Big Picture
We’ll use the same physical topology we used in the OSPF+MPLS anycast example: a leaf-and-spine fabric (admittedly with a single spine) with three anycast servers advertising 10.42.42.42/32 attached to two of the leafs:
Optimal BGP Path Selection with BGP Additional Paths
A month ago I explained how using a BGP route reflector in a large-enough non-symmetrical network could result in suboptimal routing (or loss of path diversity or multipathing). I also promised to explain how the Advertisement of Multiple Paths in BGP functionality solves that problem. Here we go…
I extended the original lab with another router to get a scenario where one route reflector (RR) client should use equal-cost paths to an external destination, while another RR client should select a best path that differs from the one the route reflector would select.
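To make the problem concrete before diving into the details, here’s a toy Python sketch (made-up path names and IGP costs) of why reflecting a single best path hurts a client whose IGP costs differ from the reflector’s:

```python
# Toy illustration of lost path diversity behind a route reflector.
# Path names and IGP costs are made up for this example.
paths = ["via-exit-A", "via-exit-B"]        # two exits for the same prefix

igp_cost = {                                # cost toward each exit as seen...
    "RR":     {"via-exit-A": 10, "via-exit-B": 20},   # ...by the reflector
    "client": {"via-exit-A": 20, "via-exit-B": 10},   # ...by one RR client
}

def best(node, candidates):
    """Pick the path with the lowest IGP cost from node's perspective."""
    return min(candidates, key=lambda path: igp_cost[node][path])

# Without AddPath the RR reflects only its own best path...
print(best("client", [best("RR", paths)]))  # via-exit-A (suboptimal)

# ...with AddPath the client sees both paths and picks its own best one
print(best("client", paths))                # via-exit-B
```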
Dynamic Negotiation of BGP Capabilities
I wanted to write a blog post explaining the intricacies of Advertisement of Multiple Paths in BGP, got into a yak-shaving exercise when discussing the need to exchange BGP capabilities to enable this feature, and decided to turn it into a separate prerequisite blog post. The optimal path selection with BGP AddPath post is coming in a few days.
The Problem
Whenever you want to use BGP for anything other than simple IPv4 unicast routing, the BGP neighbors must agree on what they are willing to do – be it multiprotocol extensions and individual address families, graceful restart, route refresh… (IANA maintains the complete BGP Capability Codes registry).
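For illustration, this is roughly how a few of those capabilities are encoded as TLVs (a bare-bones Python sketch following RFC 5492 and RFC 4760; it builds the byte strings but is nowhere close to a usable BGP implementation):

```python
# Bare-bones encoding of BGP capability TLVs (RFC 5492) -- a sketch,
# not a usable BGP implementation.
import struct

def capability(code, value=b""):
    """Capability TLV: 1-byte code, 1-byte length, value."""
    return struct.pack("!BB", code, len(value)) + value

def multiprotocol(afi, safi):
    """Multiprotocol capability (code 1, RFC 4760): AFI, reserved, SAFI."""
    return capability(1, struct.pack("!HBB", afi, 0, safi))

caps = (multiprotocol(afi=1, safi=1)    # IPv4 unicast
      + multiprotocol(afi=2, safi=1)    # IPv6 unicast
      + capability(2))                  # route refresh (code 2, RFC 2918)
print(caps.hex())                       # 0104000100010104000200010200
```

These TLVs ride in the OPEN message, which is exchanged only when the session is established – and that’s precisely the problem: adding a capability later traditionally meant tearing the session down and renegotiating from scratch.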
Creating BGP Multipath Lab with netlab
I was editing the BGP Multipathing video in the Advanced Routing Protocols section of the How Networks Really Work webinar, got to the diagram I used to explain the intricacies of IBGP multipathing, and said to myself: “that should be easy (and fun) to set up with netlab.”

Fifteen minutes later I had the lab up and running and could verify that BGP works exactly the way I explained it in the webinar.
Why Does Internet Keep Breaking?
James Miles sent me a long list of really good questions along the lines of “why do we see so many Internet-related outages lately, and is it due to BGP and DNS creaking with old age?” He started with:
Over the last few years there have been more “high profile” incidents relating to Internet connectivity. I raise the question: why?
The most obvious reason: the Internet became mission-critical infrastructure, and well-publicized incidents attract eyeballs.
Ignoring the clickbait, the underlying root cause is in many cases the race to the bottom. Large service providers brought that onto themselves when they thought they could undersell the early ISPs and compensate for their losses with voice calls (only to discover that voice-over-Internet works too well).