Category: VXLAN
Interview: Active-Active Data Centers With VXLAN and EVPN
Christoph Jaggi asked me a few questions about using VXLAN with EVPN to build data center fabrics and interconnects (including active/active data centers). The German version was published on Inside-IT; here’s the English version.
He started with an obvious one:
What is an active-active data center, and why would I want to use it?
Numerous organizations have multiple data centers for load sharing or disaster recovery purposes. They could use one of their data centers and have the other(s) as warm or cold standby (active/backup setup) or use all data centers at the same time (active/active).
OMG, VXLAN Is Still Insecure
A friend of mine told me about a “VXLAN is insecure, the sky is falling” presentation from RIPE-77 which claims that you can (under certain circumstances) inject packets into VXLAN virtual networks from the Internet.
Welcome back, Captain Obvious. Anyone looking at the VXLAN packet format could immediately figure out that there's no security in VXLAN. I pointed that out several times in my blog posts and presentations, including Cloud Computing Networking (EuroNOG, September 2011) and the NSX Architecture webinar (August 2013).
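To see why this is hardly news, consider what an attacker needs: the ability to deliver a UDP packet to port 4789 of a VTEP, plus a known (or guessed) VNI. Here's a minimal Scapy sketch of such an injection; every address and the VNI are hypothetical placeholders, and the packet only gets through if nothing filters VXLAN traffic along the path:

```python
# Minimal sketch of injecting a frame into a VXLAN segment (Scapy).
# Every address and the VNI below are made-up placeholders.
from scapy.all import Ether, IP, UDP, send
from scapy.layers.vxlan import VXLAN

# The Ethernet frame we want to appear inside the virtual network
inner = Ether(dst="aa:bb:cc:dd:ee:ff", src="00:11:22:33:44:55") / \
        IP(src="10.1.1.66", dst="10.1.1.10") / \
        UDP(dport=53)

# Outer packet: spoofed source VTEP, target VTEP, UDP/4789, and the
# 24-bit VNI of the segment (flags=0x08 sets the "valid VNI" I-bit).
outer = IP(src="192.0.2.11", dst="192.0.2.22") / \
        UDP(sport=12345, dport=4789) / \
        VXLAN(flags=0x08, vni=10100) / inner

send(outer)   # nothing in the VXLAN header authenticates the sender
```

Which is exactly why VXLAN traffic (UDP port 4789) should never be reachable from outside the fabric.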
Sunset of VXLAN. Really?
A lightning talk at the recent RIPE77 conference focused on the shortcomings of VXLAN and the rise of Geneve.
So what are those perceived shortcomings?
No protocol identifier – a single VXLAN VNI cannot carry more than one payload type. Let me point out that MPLS has the same shortcoming, as does IPsec.
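For the curious, here's what that difference looks like on the wire: a quick Python sketch of the two fixed headers (per RFC 7348 and RFC 8926; the VNI and payload-type values are arbitrary examples). VXLAN has no field describing the payload (it's always assumed to be an Ethernet frame), while Geneve carries an explicit 16-bit protocol type:

```python
import struct

# VXLAN header (RFC 7348): flags, 24 reserved bits, 24-bit VNI, 8 reserved bits.
# No field says what the payload is -- it's always an Ethernet frame.
def vxlan_header(vni: int) -> bytes:
    return struct.pack("!BBBB", 0x08, 0, 0, 0) + vni.to_bytes(3, "big") + b"\x00"

# Geneve base header (RFC 8926, no options): the 16-bit protocol type field
# tells the receiver what follows (0x6558 = Ethernet, 0x0800 = IPv4, ...).
def geneve_header(vni: int, proto: int = 0x6558) -> bytes:
    return struct.pack("!BBH", 0, 0, proto) + vni.to_bytes(3, "big") + b"\x00"

print(vxlan_header(10100).hex())           # 08 00 00 00 | 00 27 74 | 00
print(geneve_header(10100, 0x0800).hex())  # 00 00 08 00 | 00 27 74 | 00
```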
VXLAN and EVPN on Hypervisor Hosts
One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:
I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform and is insisting on using VXLAN on the compute nodes, even to the point of running BGP for inter-VXLAN traffic on the nodes.
Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.
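To put that in perspective, here's roughly what a Linux-based hypervisor host has to do to become a VTEP. The interface names, addresses, and VNI are made-up examples, and an EVPN control plane (for example FRRouting BGP) would normally populate the forwarding entries instead of the static flood list used here:

```python
# Rough sketch: turning a Linux hypervisor host into a VTEP with iproute2.
# Interface names, IP addresses, and the VNI are hypothetical examples.
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)

VNI = 10100
LOCAL_VTEP = "192.0.2.11"     # this host's underlay (loopback) address
REMOTE_VTEP = "192.0.2.22"    # another hypervisor hosting the same segment

# VXLAN device terminating the VNI on this host
sh(f"ip link add vxlan{VNI} type vxlan id {VNI} local {LOCAL_VTEP} dstport 4789 nolearning")

# Linux bridge connecting VM/container vNICs to the VXLAN device
sh("ip link add br100 type bridge")
sh(f"ip link set vxlan{VNI} master br100")
sh(f"ip link set vxlan{VNI} up")
sh("ip link set br100 up")

# Without a control plane, BUM traffic is head-end replicated to a static
# list of remote VTEPs; EVPN would build these entries dynamically.
sh(f"bridge fdb append 00:00:00:00:00:00 dev vxlan{VNI} dst {REMOTE_VTEP}")
```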
VXLAN Broadcast Domain Size Limitations
One of the attendees of my Building Next-Generation Data Center online course tried to figure out whether you can build larger broadcast domains with VXLAN than you could with VLANs. Here’s what he sent me:
I'm trying to understand the differences and similarities between VLAN and VXLAN technologies from the viewpoint of (*cast) domain limitations.
There’s no difference between the two on the client-facing side. VXLAN is just an encapsulation technology and doesn’t change how bridging works at all (read also part 2 of that story).
VXLAN Limitations of Data Center Switches
One of my readers found a Cumulus Networks article that explains why you can't have more than a few hundred VXLAN-based VLAN segments on every port of a 48-port Trident-2 data center switch. That article has unfortunately disappeared in the meantime, and even the Wayback Machine doesn't have a copy.
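The gist of the argument was hardware table scaling, not the encapsulation itself: on that ASIC generation, per-port VLAN-to-VNI mappings come out of a shared table. A back-of-the-envelope sketch (the table size below is a made-up placeholder, not a datasheet figure):

```python
# Back-of-the-envelope sketch of why "every segment on every port" doesn't scale.
# SHARED_MAPPING_ENTRIES is a hypothetical placeholder, not a Trident-2 figure.
SHARED_MAPPING_ENTRIES = 16_000   # assumed size of the shared (port, VLAN) -> VNI table
ACCESS_PORTS = 48                 # ports on a typical ToR switch

# If every segment has to be available on every access port, each segment
# burns one mapping entry per port:
segments_on_all_ports = SHARED_MAPPING_ENTRIES // ACCESS_PORTS
print(segments_on_all_ports)      # 333 -- "a few hundred", not 16 million VNIs
```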
Could We Build an IXP on Top of VXLAN Infrastructure?
Andy sent me this question:
I'm currently playing around with BGP & VXLANs and wondering: is there anything preventing someone from building a virtual IXP with VXLAN? It would be a large layer-2 network - but why has nobody built this so far, and why don't internet exchanges offer it?
There was at least one IXP that was running on top of VXLAN. I wanted to do a podcast about it with the people who helped them build it in early 2015, but one of them got a gag order.
Are VXLAN-Based Large Layer-2 Domains Safer?
One of my readers was wondering about the stability and scalability of large layer-2 domains implemented with VXLAN. He wrote:
If common BUM traffic (e.g. ARP) is handled/localized by the network (e.g. NSX or ACI), and if we control what traffic hosts can send with micro-segmentation-style filtering that blocks broadcast/multicast, are large layer-2 domains still a recipe for disaster?
There are three major (fundamental) problems with large L2 domains:
EVPN: All that Glitters Is Not Gold
Cumulus Linux 3.2 shipped with a rudimentary EVPN implementation and everyone got really excited, including smaller ASIC manufacturers that finally got a control plane for their hardware VTEP functionality.
However, while it’s nice to have EVPN support in Cumulus Linux, the claims of its benefits are sometimes greatly exaggerated.
VXLAN Ping and Traceroute
From the moment Cisco and VMware announced VXLAN, some networking engineers complained that they'd lose visibility into the end-to-end path. It took a long while, but troubleshooting tools finally started appearing in VXLAN environments: the NVO3 working group defined a fault management framework for overlay networks, and Cisco implemented at least parts of it in recent NX-OS releases.
You'll find more details in Software Gone Wild Episode 69 recorded with Lukas Krattiger in November 2016 (you can also watch VXLAN Technical Deep Dive webinar to learn more about VXLAN).
Can VMware NSX and Cisco ACI Interoperate over VXLAN?
I got a long list of VXLAN-related questions from one of my subscribers. It started with an easy one:
Does Cisco ACI use VXLAN inside the fabric or is something else used instead of VXLAN?
ACI uses VXLAN but not in a way that would be (AFAIK) interoperable with any non-Cisco product. While they do use some proprietary tagging bits, the real challenge is the control plane.
Replacing FabricPath with VXLAN, EVPN or ACI?
One of my friends plans to replace existing FabricPath data center infrastructure, and asked whether it would make sense to stay with FabricPath (using the new Nexus 5600 switches) or migrate to ACI.
I proposed a third option: go with simple VXLAN encapsulation on Nexus 9000 switches. Here’s why:
Why Do We Need VXLAN (and What Is It)?
Do you need VXLAN in your data center or could you continue using traditional bridging? Do layer-2 fabrics make sense or are they a dead end in the evolution of virtual networking?
I tried to provide a few high-level answers in the Introduction to VXLAN video which starts the VXLAN Technical Deep Dive webinar. The public version of the video is now available on ipSpace.net Free Content web site.
VXLAN Hardware Gateway Overview
One of my readers stumbled upon a blog post from 2011 explaining the potential implementations of VXLAN hardware gateways, and asked me whether that information is still relevant.
I knew that I’d included tons of information in the Data Center Fabrics and VXLAN Deep Dive webinars, but couldn’t find anything on the web, so I decided to fix that in 2015.
OMG, VXLAN Encapsulation Has No Security!
Every now and then someone actually looks at the VXLAN packet format and eventually figures out that VXLAN encapsulation doesn’t provide any intrinsic security.
TL&DR Summary: That’s old news, the sky is not falling, and deploying VXLAN won’t make your network less secure than traditional VLAN- or MPLS-based networks.