Category: VXLAN
EVPN Designs: EVPN EBGP over IPv4 EBGP
In the previous blog posts, we explored three fundamental EVPN designs: “we don’t need EVPN,” IBGP EVPN AF over IGP-advertised loopbacks (the way EVPN was designed to be used), and EBGP-only EVPN (running the EVPN AF in parallel with the IPv4 AF).
Now we’re entering Wonderland: the somewhat unusual things vendors do to make their existing stuff work while also pretending to look cool. We’ll start with EBGP-over-EBGP, and to understand why someone would want to do something like that, we have to go back to the basics.
EVPN Designs: EBGP Everywhere
In the previous blog posts, we explored the simplest possible IBGP-based EVPN design and made it scalable with BGP route reflectors.
Now, imagine someone persuaded you that EBGP is better than any IGP (OSPF or IS-IS) when building a data center fabric. You’re running EBGP sessions between the leaf and spine switches and exchanging IPv4 and IPv6 prefixes over them. Can you use the same EBGP sessions for EVPN?
TL&DR: It depends™.
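Here’s roughly how such a design could be described in netlab. I’m assuming the evpn.session parameter controls which BGP session types carry the EVPN address family; the AS numbers are illustrative, and whether your platform accepts EVPN routes over directly connected EBGP sessions is exactly the “it depends” part:

```
# EBGP-everywhere sketch: every switch is its own AS, there's no IGP, and the
# leaf-to-spine EBGP sessions carry the IPv4 underlay plus (platform permitting) EVPN.
defaults.device: eos
evpn.session: [ ebgp ]            # assumed parameter: run the EVPN AF over EBGP sessions

groups:
  leaves:
    members: [ l1, l2, l3, l4 ]
    module: [ vlan, vxlan, bgp, evpn ]
  spines:
    members: [ s1, s2 ]
    module: [ bgp, evpn ]         # spines must propagate EVPN routes between the leaves

nodes:
  l1: { bgp.as: 65001 }
  l2: { bgp.as: 65002 }
  l3: { bgp.as: 65003 }
  l4: { bgp.as: 65004 }
  s1: { bgp.as: 65100 }
  s2: { bgp.as: 65100 }
# ... VLANs, hosts, and leaf-to-spine links as in the base fabric topology ...
```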
EVPN Designs: Scaling IBGP with Route Reflectors
In the previous blog posts, we explored the simplest possible IBGP-based EVPN design and tried to figure out whether BGP route reflectors do more harm than good. Ignoring that tiny detail for the moment, let’s see how we could add route reflectors to our leaf-and-spine fabric.
As before, we’re working with the same leaf-and-spine fabric.
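Here’s a rough netlab outline of the change, assuming the bgp.rr node attribute still works the way I remember it; everything else stays the same as in the full-mesh design:

```
# Route-reflector sketch: the spines join the BGP/EVPN control plane and reflect
# EVPN routes, so every leaf needs just two IBGP sessions (one per spine).
defaults.device: eos
bgp.as: 65000

groups:
  leaves:
    members: [ l1, l2, l3, l4 ]
    module: [ vlan, vxlan, ospf, bgp, evpn ]
  spines:
    members: [ s1, s2 ]
    module: [ ospf, bgp, evpn ]
    bgp.rr: true                  # assumed attribute: make the spines route reflectors

nodes: [ l1, l2, l3, l4, s1, s2 ]
# ... VLANs, hosts, and leaf-to-spine links as in the base fabric topology ...
```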
Response: The Usability of VXLAN
Wes made an interesting comment to the Migrating a Data Center Fabric to VXLAN blog post:
The benefit of VXLAN is mostly scalability, so if your enterprise network is not scaling… just don’t. The migration path from VLANs is to just keep using VLANs. The (vendor-driven) networking industry has a huge blind spot about this.
Paraphrasing Dinesh Dutt’s famous AutoCon remark: I couldn’t disagree with you more.
Building Layer-3-Only EVPN Lab
A few weeks ago, Roman Dodin mentioned layer-3-only EVPNs: a layer-3 VPN design with no stretched VLANs in which EVPN is used to transport VRF IP prefixes.
The reality is a bit muddier (in the VXLAN world) as we still need transit VLANs and router MAC addresses; the best way to explore what’s going on behind the scenes is to build a simple lab.
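Here’s a rough netlab outline of such a lab. Treat the attribute names (in particular evpn.transit_vni) as my assumptions about how the VRF and EVPN modules fit together, not as gospel:

```
# Layer-3-only EVPN sketch: no stretched VLANs; each leaf has a 'tenant' VRF whose
# IP prefixes are advertised as EVPN type-5 routes across a shared transit VNI.
defaults.device: eos
bgp.as: 65000

vrfs:
  tenant:
    evpn.transit_vni: true        # assumed attribute: allocate a transit VNI for RT5 routes

groups:
  leaves:
    members: [ l1, l2 ]
    module: [ vrf, vxlan, ospf, bgp, evpn ]
  spines:
    members: [ s1 ]
    module: [ ospf ]
  hosts:
    members: [ h1, h2 ]
    device: linux

nodes: [ l1, l2, s1, h1, h2 ]

links:
- l1-s1
- l2-s1
- h1:
  l1:
    vrf: tenant                   # host-facing (routed) interface in the tenant VRF
- h2:
  l2:
    vrf: tenant
```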
Migrating a Data Center Fabric to VXLAN
Darko Petrovic made an excellent remark on one of my LinkedIn posts:
The majority of the networks running now in the Enterprise are on traditional VLANs, and the migration paths are limited. Really limited. How will a business transition from traditional to whatever is next?
The only sane choice I found so far in the data center environment (and I know it has been embraced by many organizations facing that conundrum) is to build a parallel fabric (preferably when the organization is doing a server refresh) and connect the new fabric with the old one with a layer-3 link (in the ideal world) or an MLAG link bundle.
BGP, EVPN, VXLAN, or SRv6?
Daniel Dib asked an interesting question on LinkedIn when considering an RT5-only EVPN design:
I’m curious what EVPN provides if all you need is L3. For example, you could run pure L3 BGP fabric if you don’t need VRFs or a limited amount of them. If many VRFs are needed, there is MPLS/VPN, SR-MPLS, and SRv6.
I received a similar question numerous times in my previous life as a consultant. It’s usually caused by vendor marketing polluting PowerPoint slide decks with acronyms without explaining the fundamentals. Let’s fix that.
EVPN Designs: IBGP Full Mesh Between Leaf Switches
In the previous blog post in the EVPN Designs series, we explored the simplest possible VXLAN-based fabric design: static ingress replication without any L2VPN control plane. This time, we’ll add the simplest possible EVPN control plane: a full mesh of IBGP sessions between the leaf switches.
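Here’s a rough netlab outline of that design (the attribute names and the AS number are my assumptions; the VLANs, hosts, and fabric links from the base topology are left out for brevity):

```
# IBGP full-mesh sketch: all leaves sit in one AS and peer with each other over
# loopbacks advertised by OSPF; the spines stay BGP-free and only provide the underlay.
defaults.device: eos
bgp.as: 65000                     # single AS => netlab builds an IBGP full mesh among BGP nodes

groups:
  leaves:
    members: [ l1, l2, l3, l4 ]
    module: [ vlan, vxlan, ospf, bgp, evpn ]
  spines:
    members: [ s1, s2 ]
    module: [ ospf ]              # underlay only: no BGP, no EVPN state on the spines

nodes: [ l1, l2, l3, l4, s1, s2 ]
# ... VLANs, hosts, and leaf-to-spine links as in the base fabric topology ...
```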
EVPN Designs: VXLAN Leaf-and-Spine Fabric
In this series of blog posts, we’ll explore numerous routing protocol designs that can be used to implement EVPN-with-VXLAN L2VPNs in a leaf-and-spine data center fabric. Every design will come with a companion netlab topology you can use to create a lab and explore the behavior of leaf- and spine switches.
Our leaf-and-spine fabric will have four leaves and two spines (but feel free to adjust the lab topology fabric parameters to build larger fabrics). The fabric will provide layer-2 connectivity to orange and blue VLANs. Two hosts will be connected to each VLAN to check end-to-end connectivity.
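If you’d like to build something similar yourself, a netlab topology along these lines should do the trick. It’s a sketch, not the companion topology from the blog post: the device type, node names, and the static-flooding parameter are assumptions you’d adjust to taste.

```
# Leaf-and-spine fabric skeleton: four leaves, two spines, two stretched VLANs,
# and a Linux host attached to every leaf (two hosts per VLAN).
defaults.device: eos              # assumed device type; any VXLAN-capable platform works
vxlan.flooding: static            # assumed parameter: static ingress replication as the starting point

vlans:
  orange:
    mode: bridge                  # layer-2-only VLAN stretched across the fabric
  blue:
    mode: bridge

groups:
  leaves:
    members: [ l1, l2, l3, l4 ]
    module: [ vlan, vxlan, ospf ] # add bgp/evpn modules depending on the design
  spines:
    members: [ s1, s2 ]
    module: [ ospf ]
  hosts:
    members: [ h1, h2, h3, h4 ]
    device: linux

nodes: [ l1, l2, l3, l4, s1, s2, h1, h2, h3, h4 ]

links:
# leaf-to-spine links
- l1-s1
- l1-s2
- l2-s1
- l2-s2
- l3-s1
- l3-s2
- l4-s1
- l4-s2
# host access ports (two hosts per VLAN)
- h1:
  l1:
    vlan.access: orange
- h2:
  l2:
    vlan.access: orange
- h3:
  l3:
    vlan.access: blue
- h4:
  l4:
    vlan.access: blue
```

Each subsequent design in this series then boils down to tweaking the routing-protocol modules and a handful of BGP attributes on top of this skeleton.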
VXLAN Virtual Labs Have Never Been Easier
I stumbled upon an “I want to dive deep into VXLAN and plan to build a virtual lab” discussion on LinkedIn. Of course, I suggested using netlab. After all, you have to build an IP core and VLAN access networks and connect a few clients to those access networks before you can start playing with VXLAN, and those things tend to be excruciatingly dull.
Now imagine you decide to use netlab. Out of the box, you get topology management, lab orchestration, IPAM, routing protocol design (OSPF, BGP, and IS-IS), and device configurations, including IP routing and VLANs.
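For example, the boring plumbing (a routed core plus access VLANs with attached Linux hosts) could be described in a topology file along these lines; the device type and names are, of course, illustrative:

```
# "Boring plumbing" sketch: two switches connected with a routed (OSPF) core link,
# one access VLAN with a Linux host on each switch -- the groundwork for VXLAN experiments.
defaults.device: eos

vlans:
  red:
    mode: irb                     # routed VLAN: the switch gets an SVI advertised into OSPF
  green:
    mode: irb

nodes:
  s1: { module: [ ospf, vlan ] }
  s2: { module: [ ospf, vlan ] }
  h1: { device: linux }
  h2: { device: linux }

links:
- s1-s2                           # routed core link
- h1:
  s1:
    vlan.access: red
- h2:
  s2:
    vlan.access: green
```

Run netlab up, and addressing, OSPF, and VLAN configuration are taken care of; you can then start layering VXLAN on top.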
VXLAN/EVPN Layer-3 Handoff (L3Out) on Arista EOS
A while ago, I published a blog post describing how to establish a LAN/WAN L3 boundary in VXLAN/EVPN networks using Cisco NX-OS. At that time, I promised similar information for Arista EOS. Here it is, coming straight from Massimo Magnani. The useful part of what follows is his; all errors were introduced during my editing process.
In the cases I have dealt with so far, implementing the LAN-WAN boundary has the main benefit of limiting the churn blast radius to the local domain while impacting the remote ones as little as possible. To achieve that, we decided to go for a hierarchical solution: create two domains, local (default) and remote, and keep them as separate as possible.
Worth Reading: Introduction of EVPN at DE-CIX
Numerous Internet Exchange Points (IXPs) started using VXLAN years ago to replace traditional layer-2 fabrics with routed networks. Many of them tried to avoid the complexities of EVPN and used VXLAN with statically-configured (and hopefully automated) ingress replication.
A few went a step further and decided to deploy EVPN, primarily to get Proxy ARP functionality on the EVPN switches and reduce ARP/ND traffic. Thomas King from DE-CIX described their experience on the APNIC blog – well worth reading if you’re interested in layer-2 fabrics.
Does EVPN/VXLAN over SD-WAN Make Sense?
It looks like we might be seeing VXLAN-over-SD-WAN deployments in the wild. Here’s the “why that makes sense” argument I received from a participant in an ipSpace.net Design Clinic session where I wasn’t exactly enthusiastic about the idea.
Also, the EVPN-over-WAN idea is not hypothetical since EVPN+VXLAN is now the easiest way to build L3VPN with data center switches that don’t support MPLS LDP. Folks with no interest in EVPN’s L2 features are still using it for L3VPN.
Let’s unravel this scenario a bit:
Layer-3 WAN Handoff (L3Out) in VXLAN/EVPN Fabrics
I got a question from a few of my students regarding the best way to implement end-to-end EVPN across multiple locations. Obviously, there are the multi-pod and multi-site architectures for people who believe in the magic powers of stretching VLANs across the globe, but I was looking for something I could recommend to people who understand that you need an L3 boundary if you want multiple independent failure domains (or availability zones).
MLAG Clusters without a Physical Peer Link
With the widespread deployment of Ethernet-over-something technologies, it became possible to build MLAG clusters without a physical peer link, replacing it with a virtual link across the core fabric. Avaya was one of the first vendors to implement virtual peer links with Provider Backbone Bridging (PBB) transport, and some data center switching vendors (example: Cisco) offer similar functionality with VXLAN transport.