Category: VXLAN
VXLAN-to-VXLAN Bridging in DCI Environments
Almost exactly a decade ago I wrote that VXLAN isn’t a data center interconnect technology. That’s still true, but you can make it a bit better with EVPN – at the very minimum you’ll get an ARP proxy and anycast gateway. Even this combo does not address the other requirements I listed a decade ago, but maybe I’m too demanding and good enough works well enough.
However, there is one other bit that was missing from most VXLAN implementations: LAN-to-WAN VXLAN-to-VXLAN bridging. Sounds weird? Supposedly a picture is worth a thousand words, so here we go.
VXLAN-Focused Design Clinic in June 2022
ipSpace.net subscribers are probably already familiar with the Design Clinic: a monthly Zoom call in which we discuss real-life design and technology challenges. I started it in September 2021, and it quickly became reasonably successful; we have covered almost two dozen topics so far.
Most of the challenges contributed for the June 2022 session were focused on VXLAN use cases (quite fitting considering I just updated the VXLAN Technical Deep Dive webinar), including:
- Can we implement Data Center Interconnect (DCI) with VXLAN? (Yes, but…)
- Can we run VXLAN over SD-WAN (and does it make sense)? (Yes/No)
- What happened to traditional MPLS/VPN Enterprise core and can we use VXLAN/EVPN instead? (Still there/Maybe)
- Should we use routers or switches as data center WAN edge devices, and how do we integrate them with VXLAN/EVPN data center fabric? (Yes 😉)
For more details, join us on June 6th. There’s just a minor gotcha: you have to be an active ipSpace.net subscriber to do it.
Overlay Virtual Networking Examples
One of the ipSpace.net subscribers wanted to see real-life examples in the Overlay Virtual Networking webinar:
It would be nice to have real-world examples. The webinar lacks content about how to obtain a fully working L3 fabric overlay network, including gateways, VRFs, security zones, etc. I know there is no single “design for all”, but a few complete architectures from L2 to L7 would be appreciated over deep dives into specific protocols or technologies.
Most ipSpace.net webinars are bits of a larger puzzle. In this particular case:
EVPN/VXLAN Complexity
We have school holidays this week, so I’m reposting wonderful comments that would otherwise be lost somewhere in the page margins. Today: Minh Ha on complexity of emulating layer-2 networks with VXLAN and EVPN.
Dmytro Shypovalov is a master networker who has a sophisticated grasp of some of the most advanced topics in networking. He doesn’t write often, but when he does, he writes exceptional content, both deep and broad. Have to say I agree with him 300% on “If an L2 network doesn’t scale, design a proper L3 network. But if people want to step on rakes, why discourage them.”
Worth Reading: Switching to IP fabrics
Namex, an Italian IXP, decided to replace their existing peering fabric with a fully automated leaf-and-spine fabric using VXLAN and EVPN running on Cumulus Linux.
They documented the design, deployment process, and automation scripts they developed in an extensive blog post that’s well worth reading. Enjoy ;)
Why Would You Need VXLAN Transport?
It’s amazing how sometimes people fond of sharing their opinions and buzzwords on various social media can’t answer simple questions. Today’s blog post is based on a true story… a “senior network architect” fully engaged in a recent hype cycle couldn’t answer a simple question:
Why exactly would you need VXLAN and EVPN?
We could spend a day (or a week) discussing the nuances of that simple question, but all I have at the moment is a single web page, so here we go…
Should I Go with VXLAN or MLAG with STP?
TL&DR: It’s 2020, and VXLAN with EVPN is all the rage. Thank you, you can stop reading.
On a more serious note, I got this question from Johannes Spanier after he read my do we need complex data center switches for NSX underlay blog post:
Would you agree that for smaller NSX designs (~100 hypervisors) a much simpler Layer2 based access-distribution design with MLAGs is feasible? One would have two distribution switches and redundant access switches MLAGed together.
I would still prefer VXLAN for a number of reasons:
The Never-Ending "My Overlay Is Better Than Yours" Saga
I published a blog post describing how complex the underlay supporting VMware NSX still has to be (because someone keeps pretending a network is just a thick yellow cable), and the tweet announcing it admittedly looked like clickbait.
[Blog] Do We Need Complex Data Center Switches for VMware NSX Underlay
Martin Casado quickly replied NO (probably before reading the whole article), starting a whole barrage of overlay-focused neteng-versus-devs fun.
Getting More Bang for Your VXLAN Bucks
A little while ago I explained why you can’t use more than 4K VXLAN segments on a ToR switch (at least with most ASICs out there). Does that mean that you’re limited to a total of 4K virtual Ethernet segments?
Of course not.
You could implement overlay virtual networks in software (on hypervisors or container hosts), although even there the enterprise products rarely give you more than a few thousand logical switches (to use NSX terminology)… but that’s a product limitation, not a technology limitation. Large public cloud providers use the same (or similar) technology to run gazillions of tenant segments.
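To show why software implementations aren’t constrained the same way, here’s a minimal Python sketch (my illustration, not modeled on any particular product): a hypervisor vswitch can key its MAC table directly on the 24-bit VNI, while most switching ASICs key theirs on a 12-bit VLAN ID.

```python
# Minimal sketch of a software vswitch MAC table keyed directly on the 24-bit
# VNI (illustration only -- not how any particular product is implemented).
VNI_MAX = 2**24           # 16,777,216 possible VXLAN segments

class SoftwareVSwitch:
    def __init__(self):
        self.mac_table = {}          # (vni, mac) -> egress VTEP IP

    def learn(self, vni: int, mac: str, vtep_ip: str) -> None:
        if not 0 <= vni < VNI_MAX:
            raise ValueError(f"VNI {vni} outside the 24-bit range")
        self.mac_table[(vni, mac)] = vtep_ip

    def lookup(self, vni: int, mac: str):
        return self.mac_table.get((vni, mac))

vswitch = SoftwareVSwitch()
vswitch.learn(vni=5_000_000, mac="02:00:00:aa:bb:cc", vtep_ip="10.0.0.7")
print(vswitch.lookup(5_000_000, "02:00:00:aa:bb:cc"))   # -> 10.0.0.7
```

Nothing in the lookup path cares about VLAN IDs, which is why the 4K ceiling simply doesn’t apply here; the practical limits come from table memory and product design instead.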
EVPN Route Targets, Route Distinguishers, and VXLAN Network IDs
Got this interesting question from one of my readers:
A BGP EVPN message carries both the VNI and the RT. When importing a route, is it enough to have either the VNI or the RT to import it into the respective VRF? When importing routes into a VRF, which is considered first: the RT or the VNI?
A bit of terminology first (which you’d be very familiar with if you ever had to study how MPLS/VPN works):
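In short: the route distinguisher (RD) makes otherwise-identical routes from different VRFs unique, the route target (RT, a BGP extended community) drives the import decision, and the VNI then ties the imported route to a bridge domain. Here’s a deliberately simplified Python model of that logic (an illustration only, nothing like a real BGP implementation):

```python
# Deliberately simplified model of EVPN route import (illustration only).
from dataclasses import dataclass, field

@dataclass
class EvpnRoute:
    rd: str                  # route distinguisher: makes the NLRI unique
    route_targets: set[str]  # extended communities attached to the route
    vni: int                 # VNI carried in the route
    mac: str

@dataclass
class MacVrf:
    import_rts: set[str]     # RTs this VRF is configured to import
    local_vni: int           # VNI mapped to this bridge domain
    table: dict = field(default_factory=dict)

    def maybe_import(self, route: EvpnRoute) -> bool:
        # The import decision is driven by the RT match ...
        if not (route.route_targets & self.import_rts):
            return False
        # ... the VNI is then used to tie the route to a bridge domain.
        self.table[route.mac] = route
        return True

vrf = MacVrf(import_rts={"65000:100"}, local_vni=10100)
route = EvpnRoute(rd="10.0.0.1:100", route_targets={"65000:100"},
                  vni=10100, mac="02:00:00:aa:bb:cc")
print(vrf.maybe_import(route))   # True -- the RT matches the import policy
```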
The EVPN Dilemma
Got an interesting set of questions from a networking engineer who got stuck with the infamous “let’s push the **** down the stack” challenge:
So I am a rather green network engineer trying to solve the typical layer two stretch problem.
I could start the usual “friends don’t let friends stretch layer-2” or “your business doesn’t really need that” windmill fight, but let’s focus on how the vendors are trying to sell him the “perfect” solution:
Upcoming Workshops: NSX, ACI, VXLAN, EVPN, DCI and More
I’m running two workshops in Zurich in the next 10 days:
- Comparing VMware NSX and Cisco ACI (and how EVPN and VXLAN fit into the big picture) on Thursday, November 28th;
- Explaining how you could use VXLAN with EVPN to build infrastructure for active-active data centers on Tuesday, December 3rd.
I published the slide deck for the NSX versus ACI workshop a few days ago (and you can already download it if you have a paid ipSpace.net subscription) and it’s full of new goodness like ACI vPod, multi-pod ACI, multi-site ACI, ACI-on-AWS, and multi-site NSX-V and NSX-T.
Can We Really Use Millions of VXLAN Segments?
One of my readers sent me a question along these lines…
The VXLAN Network Identifier is 24 bits long, giving us 16 million separate segments. However, we have to map VNIs into VLANs on most switches. How can we scale up to 16 million segments when we run out of VLAN IDs? Can we create a separate VTEP on the same switch?
VXLAN is just an encapsulation format and does not imply any particular switch architecture. What really matters in this particular case is the implementation of the MAC forwarding table in switching ASIC.
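The usual way out is that VLAN IDs are only locally significant: each switch independently maps its own set of up to 4094 active VLANs onto fabric-wide 24-bit VNIs, so the fabric as a whole can carry far more than 4K segments as long as no single switch needs more than 4094 of them at once. A short Python sketch of that idea (assumed typical behavior, not a description of any specific ASIC):

```python
# Sketch of locally-significant VLAN-to-VNI mapping (assumed typical behavior,
# not any specific ASIC): every switch allocates from its own 4094 VLAN IDs.
VLAN_IDS = range(1, 4095)        # usable 12-bit VLAN IDs on a single switch

class TorSwitch:
    def __init__(self, name: str):
        self.name = name
        self.vlan_to_vni = {}    # local VLAN ID -> fabric-wide VNI

    def map_segment(self, vni: int) -> int:
        free = next((v for v in VLAN_IDS if v not in self.vlan_to_vni), None)
        if free is None:
            raise RuntimeError(f"{self.name}: all 4094 local VLANs in use")
        self.vlan_to_vni[free] = vni
        return free

tor_a, tor_b = TorSwitch("tor-a"), TorSwitch("tor-b")
print(tor_a.map_segment(vni=1_000_001))   # VLAN 1 on tor-a -> VNI 1000001
print(tor_b.map_segment(vni=9_999_999))   # VLAN 1 on tor-b -> a different VNI
```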
VMware NSX-T and Geneve Q&A
A Network Artist left a lengthy comment on my Brief History of VMware NSX blog post. He raised a number of interesting topics, so I decided to write my replies as a separate blog post.
Using Geneve is an interesting choice, and while the approach has its own pros and cons, I would stick to VXLAN if I were to recommend something, for a few good reasons.
The main reason I see for NSX-T using Geneve instead of VXLAN is the need for additional header fields to carry metadata around, and to implement the Network Service Header (NSH) for east-west service insertion.
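To illustrate the difference, here’s a rough Python sketch of the two header formats (layout per RFC 7348 for VXLAN and RFC 8926 for Geneve; the Geneve option class and type values are invented for the example): VXLAN is a fixed 8-byte header with no room for metadata, while Geneve can append variable-length TLV options.

```python
# Rough sketch of the two headers (layout per RFC 7348 and RFC 8926); the
# Geneve option class/type values below are invented for illustration.
import struct

def vxlan_header(vni: int) -> bytes:
    # Fixed 8 bytes: I-flag plus reserved bits, then 24-bit VNI + reserved byte
    return struct.pack("!II", 0x08 << 24, vni << 8)

def geneve_header(vni: int, options: bytes = b"") -> bytes:
    assert len(options) % 4 == 0             # options come in 4-byte multiples
    ver_optlen = (0 << 6) | (len(options) // 4)   # version 0, option length
    return struct.pack("!BBHI", ver_optlen, 0, 0x6558, vni << 8) + options

# One hypothetical 4-byte metadata option (class 0x0102 / type 0x01 made up):
opt = struct.pack("!HBB", 0x0102, 0x01, 0)
print(len(vxlan_header(5001)))           # always 8 bytes -- no room for metadata
print(len(geneve_header(5001, opt)))     # 8-byte base + 4 bytes of options = 12
```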
Don't Base Your Design on Vendor Marketing
Remember how Arista promoted VXLAN coupled with deep buffer switches as the perfect DCI solution a few years ago? Someone took Arista’s marketing too literally, ran with the idea and combined VXLAN-based DCI with traditional MLAG+STP data center fabric.
While I love that they wrote a blog post documenting their experience (if only more people would do that), it doesn’t change the fact that the design contains the worst of both worlds.
Here are just a few things that went wrong: