Immediately after VXLAN was announced @ VMworld, the twittersphere erupted in speculation and questions, many of them focusing on how VXLAN relates to OTV and LISP, and why we might need a new encapsulation method.
VXLAN, OTV and LISP are point solutions targeting different markets. VXLAN is an IaaS infrastructure solution, OTV is an enterprise L2 DCI solution and LISP is ... whatever you want it to be.
VXLAN tries to solve a very specific IaaS infrastructure problem: replace VLANs with something that might scale better. In a massive multi-tenant data center with thousands of customers, each one asking for multiple isolated IP subnets, you quickly run out of the 4096 available VLANs. VMware tried to solve the problem with MAC-in-MAC encapsulation (vCDNI), and you could potentially do the same with the right combination of EVB (802.1Qbg) and PBB (802.1ah), very clever tricks a-la Network Janitor, or even with MPLS.
Compared to all these, VXLAN has a very powerful advantage: it runs over IP. You don’t have to touch your existing well-designed L3 data center network to start offering IaaS services. The need for multipath bridging voodoo magic that a decent-sized vCDNI deployment would require is gone. VXLAN gives Cisco and VMware the ability to start offering reasonably-well-scaling IaaS cloud infrastructure. It also gives them something to compete against Open vSwitch/Nicira combo.
Reading the VXLAN draft, you might notice that all the control-plane aspects are solved with handwaving. Segment ID values just happen, IP multicast addresses are defined at the management layer and the hypervisors hosting the same VXLAN segment don’t even talk to each other, but rely on layer-2 mechanisms (flooding and dynamic MAC address learning) to establish inter-VM communication. VXLAN is obviously a QDS (Quick-and-Dirty-Solution) addressing a specific need – increasing the scalability of IaaS networking infrastructure.
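The data-plane side of the draft is simple enough to sketch in a few lines. This is a minimal Python illustration of the encapsulation (not actual VTEP code): an 8-byte VXLAN header with a 24-bit segment ID (VNI) prepended to the original Ethernet frame, which a hypervisor would then ship in a UDP/IP packet. Function names are mine; the header layout and the 24-bit VNI are from the draft.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in the draft:
    flags byte (I bit set = VNI is valid), 24-bit VNI, reserved fields."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08  # 'I' flag: VNI field is valid
    # flags(1) + reserved(3) + VNI(3) + reserved(1) = 8 bytes
    return struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header. A real VTEP would then wrap this in
    UDP/IP addressed to the remote VTEP or to the segment's IP
    multicast group -- which is exactly where the control-plane
    handwaving starts."""
    return vxlan_header(vni) + inner_ethernet_frame
```

Note that the 24-bit VNI is the whole point: roughly 16 million segments instead of 4096 VLANs, with the segment-ID-to-multicast-group mapping left to the management layer.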
VXLAN will indeed scale way better than VLAN-based solutions, as it provides total separation between the virtualized segments and the physical network (no need to provision VLANs on the physical switches). It will also scale somewhat better than MAC-in-MAC encapsulation because it relies on L3 transport (and can thus work well in existing networks), but it’s still a very far cry from Amazon EC2. People with extensive (bad) IP multicast experience are also questioning the wisdom of using IP multicast instead of source-based unicast replication ... but if you want to remain control-plane ignorant, you have to rely on third parties (read: IP multicast) to help you find your way around.
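The flood-and-learn behavior the draft relies on boils down to a per-segment lookup table. Here's a minimal sketch (class and attribute names are mine, the multicast address is illustrative): unknown destinations are flooded to the segment's IP multicast group, and source MAC addresses are learned from received encapsulated frames, exactly like a transparent bridge would do.

```python
class VtepForwardingTable:
    """Per-VXLAN-segment flood-and-learn behavior: no control plane,
    just dynamic MAC-to-remote-VTEP learning in the data plane."""

    def __init__(self, multicast_group: str):
        self.multicast_group = multicast_group  # segment's IP multicast address
        self.mac_to_vtep = {}                   # learned MAC -> remote VTEP IP

    def learn(self, src_mac: str, remote_vtep_ip: str) -> None:
        # Learn from the outer source IP of received encapsulated frames
        self.mac_to_vtep[src_mac] = remote_vtep_ip

    def next_hop(self, dst_mac: str) -> str:
        # Unknown unicast (and broadcast/multicast): flood via IP multicast
        return self.mac_to_vtep.get(dst_mac, self.multicast_group)
```

This is why the multicast dependency exists: with no control plane, flooding to the group is the only way a VTEP can reach VMs it hasn't heard from yet.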
It seems there have already been claims that VXLAN solves inter-DC VM mobility (I sincerely hope I got the wrong impression from Duncan Epping’s summary of Steve Herrod’s general session @ VMworld). If you’ve ever heard about traffic trombones, you should know better (but it does prove a point @etherealmind made recently). Regardless of the wishful thinking and beliefs in flat earth, holy grails and unicorn tears, a pure bridging solution (and VXLAN is no more than that) will never work well over long distances.
Here’s where OTV kicks in: if you do become tempted to implement long-distance bridging, OTV is the least horrendous option (BGP MPLS-based MAC VPN will be even better, but it still seems to be working primarily in PowerPoint). It replaces dynamic MAC address learning with deterministic routing-like behavior, provides proxy ARP services, and stops unicast flooding. Until we’re willing to change the fundamentals of transparent bridging, that’s almost as good as it gets.
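To see the contrast with VXLAN's flood-and-learn, here's the same lookup sketched OTV-style (again purely illustrative Python, not actual OTV code): MAC reachability is installed by control-plane advertisements instead of data-plane learning, and unknown unicast is dropped rather than flooded across the DCI.

```python
class OtvStyleEdgeTable:
    """OTV-like behavior: deterministic, routing-like MAC reachability.
    The table is populated by control-plane advertisements, not by
    snooping data-plane traffic."""

    def __init__(self):
        self.mac_to_edge = {}  # MAC -> remote edge device IP

    def advertise(self, mac: str, remote_edge_ip: str) -> None:
        # A routing-protocol-like update replaces dynamic learning
        self.mac_to_edge[mac] = remote_edge_ip

    def next_hop(self, dst_mac: str):
        # Unknown unicast is NOT flooded across the overlay -> drop (None)
        return self.mac_to_edge.get(dst_mac)
```

The one-line difference in `next_hop` (drop instead of flood) is precisely what makes long-distance bridging merely risky instead of suicidal.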
As you can see, it makes no sense to compare OTV and VXLAN; it’s like comparing a racing car to a downhill mountain bike. Unfortunately, you can’t combine them to get the best of both worlds; at the moment, OTV and VXLAN live in two parallel universes. OTV provides long-distance bridging-like behavior for individual VLANs, and VXLAN cannot even be transformed into a VLAN.
LISP is yet another story. It provides a very rudimentary approximation to IP address mobility across layer-3 subnets, and it might be able to do it better once everyone realizes the hypervisor is the only place to do it properly. However, it’s a layer-3 solution running on top of layer-2 subnets, which means you might run LISP in combination with OTV (not sure it makes sense, but nonetheless), and you should be able to run LISP in combination with VXLAN once you can terminate VXLAN on a LISP-capable L3 device.
So, with the introduction of VXLAN, the networking world hasn’t changed a bit: the vendors are still serving us all isolated incompatible technologies ... and all we’re asking for is tightly integrated and well-architected designs.
Even more information
I’ll talk about data center fabric architectures and networking requirements for cloud computing at the upcoming EuroNOG conference.
You’ll find in-depth discussions of various data center and network virtualization technologies in my Data Center webinars: Data Center 3.0 for Networking Engineers (recording), Data Center Interconnects (recording) and VMware Networking Deep Dive (recording). If you're interested in all three webinars, check out the Data Center Trilogy.
All four webinars are available as part of the yearly subscription.