
Category: EVPN

VMware NSX Killed My EVPN Fabric

A while ago I had an interesting discussion with someone running VMware NSX on top of a VXLAN+EVPN fabric - a pretty common scenario considering:

  • NSX’s insistence on having all VXLAN uplinks from the same server in the same subnet;
  • Data center switching vendors being on a lemming-like run praising EVPN+VXLAN;
  • Non-FANG environments being somewhat reluctant to connect a server to a single switch.

His fabric was running well… apart from the weird times when someone started tons of new VMs.


Don’t Sugarcoat the Challenges You Have

Last year I got into a somewhat-heated discussion with a few engineers who followed the advice to run the IBGP EVPN address family on top of an EBGP underlay.

My main argument was simple: this is not how BGP was designed or how it’s commonly used, and twisting it this way requires a schizophrenic BGP routing process that introduces unnecessary complexity (even though it looks simple in the Junos configuration) and might confuse the people who have to run the network after the brilliant designer is gone.


Private VLANs with VXLAN

Got this remark from a reader after he read the VXLAN and Q-in-Q blog post:

Another area where there is a feature gap with EVPN VXLAN is Private VLANs with VXLAN. They’re not supported on either Nexus or Juniper switches.

I have one word on using private VLANs in 2019: Don’t. They are messy and hard to maintain (not to mention it gets really interesting when you’re combining virtual and physical switches).


Q-in-Q Support in Multi-Site EVPN

One of my subscribers sent me a question along these lines (heavily abridged):

My customer is running a colocation business, and has to provide L2 connectivity between racks, sometimes even across multiple data centers. They were using Q-in-Q to deliver that in a traditional fabric, and would like to replace that with multi-site EVPN fabric with ~100 ToR switches in each data center. However, Cisco doesn’t support Q-in-Q with multi-site EVPN. Any ideas?

As Lukas Krattiger explained in his part of the Multi-Site Leaf-and-Spine Fabrics section of the Leaf-and-Spine Fabric Architectures webinar, multi-site EVPN (VXLAN-to-VXLAN bridging) is hard. Don’t expect miracles like Q-in-Q over VNI any time soon ;)
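
For readers who haven’t run into Q-in-Q, here’s a rough illustrative sketch (hypothetical VLAN IDs, simplified frame handling) of the 802.1ad double tagging the customer relies on: the provider pushes its own S-tag in front of the customer’s C-tag, so a service is identified by a pair of VLAN tags instead of a single one - and that pair somehow has to survive the trip across the EVPN fabric.

```python
import struct

def push_qinq_tags(untagged_frame: bytes, s_vid: int, c_vid: int) -> bytes:
    """Rough sketch of 802.1ad (Q-in-Q) double tagging.

    The frame is split after the destination and source MAC addresses, and
    two tags are inserted: the provider S-tag (TPID 0x88A8) followed by the
    customer C-tag (TPID 0x8100). PCP/DEI bits are left at zero.
    """
    dst_src = untagged_frame[:12]      # destination MAC + source MAC
    rest = untagged_frame[12:]         # original EtherType + payload
    s_tag = struct.pack("!HH", 0x88A8, s_vid & 0x0FFF)
    c_tag = struct.pack("!HH", 0x8100, c_vid & 0x0FFF)
    return dst_src + s_tag + c_tag + rest

frame = bytes(12) + b"\x08\x00" + bytes(50)        # dummy untagged frame
double_tagged = push_qinq_tags(frame, s_vid=200, c_vid=30)
```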


Interview: Active-Active Data Centers with VXLAN and EVPN

Christoph Jaggi asked me a few questions about using VXLAN with EVPN to build data center fabrics and data center interconnects (including active/active data centers). The German version was published on Inside-IT; here’s the English version.

He started with an obvious one:

What is an active-active data center and why would I want to use an active-active data center?

Numerous organizations have multiple data centers for load sharing or disaster recovery purposes. They could use one of their data centers and have the other(s) as warm or cold standby (active/backup setup) or use all data centers at the same time (active/active).


Using VXLAN and EVPN to Build Active-Active Data Centers

Some (anti)patterns of the networking industry are way too predictable: every time there’s a new technology, marketers start promoting it as the solution for every problem ever imagined. VXLAN was quickly touted as the solution for long-distance vMotion, and now everyone is telling you how to use VXLAN with EVPN to stretch VLANs across multiple data centers.

Does that make sense? It might… depending on your requirements and the features available on the devices you use to implement the VXLAN/EVPN fabric. We’ll cover the details in a day-long workshop in Zurich (Switzerland) on December 5th. There are still a few places left; register here.


VXLAN and EVPN on Hypervisor Hosts

One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:

I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform and is insisting on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.

Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.


Implications of Valley-Free Routing in Data Center Fabrics

As I explained in a previous blog post, most leaf-and-spine best-practices (as in: what to do if you have no clue) use BGP as the IGP routing protocol (regardless of whether it’s needed) with the same AS number shared across all spine switches to implement valley-free routing.

This design has an interesting consequence: when a link between a leaf and a spine switch fails, they can no longer communicate.
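
The reason is easiest to see with a tiny simulation of EBGP AS-path loop prevention (hypothetical AS numbers, not taken from the original post): once the direct leaf-to-spine link is gone, every alternate path to that spine would have to cross another spine, and the other spine drops any route whose AS path already contains the AS number shared by all spines.

```python
# Minimal sketch of BGP AS-path loop prevention in a leaf-and-spine fabric
# where all spine switches share one AS (hypothetical AS numbers).

SPINE_AS = 65000
LEAF1_AS, LEAF2_AS = 65001, 65002

def accepts(receiver_as: int, as_path: list[int]) -> bool:
    """Standard EBGP loop prevention: reject if our AS is already in the path."""
    return receiver_as not in as_path

# The leaf1-spine1 link fails. Spine1 advertises its loopback to leaf2:
path_at_leaf2 = [SPINE_AS]                     # AS path as seen by leaf2
# Leaf2 re-advertises the route to spine2, prepending its own AS:
path_offered_to_spine2 = [LEAF2_AS] + path_at_leaf2

print(accepts(SPINE_AS, path_offered_to_spine2))   # False -> spine2 drops it
# Spine2 therefore never learns a route to spine1, so leaf1 cannot reach
# spine1 through spine2 once the direct link is gone (valley-free routing).
```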


VXLAN Broadcast Domain Size Limitations

One of the attendees of my Building Next-Generation Data Center online course tried to figure out whether you can build larger broadcast domains with VXLAN than you could with VLANs. Here’s what he sent me:

I'm trying to understand differences or similarities between VLAN and VXLAN technologies in a view of (*cast) domain limitation.

There’s no difference between the two on the client-facing side. VXLAN is just an encapsulation technology and doesn’t change how bridging works at all (read also part 2 of that story).
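
To make “just an encapsulation” a bit more tangible, here’s a minimal sketch (simplified: only the 8-byte VXLAN header, no outer Ethernet/IP/UDP headers, no sockets) showing that the original Ethernet frame is carried verbatim - whatever bridging does with it is completely unaffected by the 24-bit VNI prepended in front of it.

```python
import struct

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend a (simplified) 8-byte VXLAN header to an untouched L2 frame.

    Real deployments add outer Ethernet/IP/UDP headers in front of this;
    the point is that the inner frame - and therefore bridging behavior,
    flooding, and MAC learning semantics - is not changed at all.
    """
    flags = 0x08 << 24                      # I-bit set: VNI field is valid
    vxlan_header = struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)
    return vxlan_header + inner_ethernet_frame

frame = bytes(64)                           # dummy Ethernet frame
packet = vxlan_encapsulate(frame, vni=10100)
assert packet[8:] == frame                  # inner frame is carried verbatim
```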


Typical EVPN BGP Routing Designs

As discussed in a previous blog post, IETF designed EVPN to be a next-generation BGP-based VPN technology providing scalable layer-2 and layer-3 VPN functionality. EVPN was initially designed to be used with the MPLS data plane and was later extended to support numerous data plane encapsulations, VXLAN being the most common one.

Design Requirements

Like any other BGP-based solution, EVPN uses BGP to transport endpoint reachability information (customer MAC and IP addresses and prefixes, flooding trees, and multi-attached segments), and relies on an underlying routing protocol to provide BGP next-hop reachability information.
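
As a purely illustrative sketch (simplified field names and hypothetical values, not a wire-format implementation), the information carried in a single EVPN MAC/IP Advertisement route (route type 2) looks roughly like this; note that the BGP next hop is the egress VTEP, whose reachability has to come from the underlay.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvpnMacIpRoute:
    """Simplified view of an EVPN MAC/IP Advertisement (route type 2)."""
    route_distinguisher: str      # e.g. "10.0.0.1:100"
    esi: str                      # Ethernet Segment Identifier (multihoming)
    mac_address: str              # customer MAC address being advertised
    ip_address: Optional[str]     # optional host IP (ARP suppression, L3 VPN)
    vni: int                      # VXLAN VNI (or MPLS label) for the service
    next_hop: str                 # egress VTEP loopback; must be reachable
                                  # through the underlay routing protocol

# Hypothetical values, purely for illustration:
route = EvpnMacIpRoute(
    route_distinguisher="10.0.0.1:100",
    esi="00:00:00:00:00:00:00:00:00:00",
    mac_address="00:50:56:ab:cd:ef",
    ip_address="172.16.1.10",
    vni=10100,
    next_hop="10.0.0.1",
)
```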


Dissecting IBGP+EBGP Junos Configuration

Networking engineers familiar with Junos love to tell me how easy it is to configure and operate an IBGP EVPN overlay on top of an EBGP IP underlay. Krzysztof Szarkowicz was kind enough to send me the (probably) simplest possible configuration (here’s another one by Alexander Grigorenko).

To learn more about EVPN technology and its use in data center fabrics, watch the EVPN Technical Deep Dive webinar.


What Is EVPN?

EVPN might be the next big thing in networking… or at least all the major networking vendors think so. It’s also a pretty complex technology still facing some interoperability challenges (I love to call it the SIP of networking).

To make matters worse, EVPN can easily get even more confusing if you follow some convoluted designs propagated on the ‘net… and the best antidote to that is to invest time into understanding the fundamentals, and to slowly work through more complex scenarios after mastering the basics.


Using 4-Byte BGP AS Numbers with EVPN on Junos

After documenting the basic challenges of using EBGP and 4-byte AS numbers with EVPN automatic route targets, I asked my friends working for various vendors how their implementation solves these challenges. This is what Krzysztof Szarkowicz sent me on specifics of Junos implementation:

To learn more about EVPN technology and its use in data center fabrics, watch the EVPN Technical Deep Dive webinar.
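
For context, here’s a back-of-the-envelope sketch of where the challenge comes from (a simplified rendering of the RFC 8365 route-target auto-derivation rules, hypothetical numbers): the auto-derived route target puts the AS number into the 2-byte Global Administrator field of a two-octet-AS-specific extended community and the 24-bit VNI into the 4-byte Local Administrator field; a 4-byte AS number doesn’t fit, and the extended community type that could carry it leaves only two bytes for the local value.

```python
def auto_derived_route_target(asn: int, vni: int) -> str:
    """Sketch of automatic route-target derivation (simplified).

    A 2-byte AS goes into the Global Administrator field and the 24-bit VNI
    into the Local Administrator field, yielding an "ASN:VNI" route target.
    """
    if asn > 0xFFFF:
        # A 4-byte AS number doesn't fit into the 2-byte Global Administrator
        # field, and the 4-byte-AS-specific community type leaves only 16 bits
        # for the local value - too small for a 24-bit VNI.
        raise ValueError("cannot auto-derive an RT from a 4-byte AS number")
    if vni > 0xFFFFFF:
        raise ValueError("VNI must fit into 24 bits")
    return f"{asn}:{vni}"

print(auto_derived_route_target(65001, 10100))        # 65001:10100
try:
    auto_derived_route_target(4_200_000_000, 10100)
except ValueError as err:
    print(err)                                        # the 4-byte AS problem
```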
