DJ Spry asked an interesting question in a comment to my MPLS/VPN in DCI designs post: “Why would one choose OTV over MPLS/VPN?” The answer is simple: it depends on what you need. MPLS/VPN provides path isolation between layer-3 domains (routed networks) across MPLS or IP infrastructure, whereas OTV provides layer-2 transport (and VLAN-based path isolation) across IP infrastructure.

However, it does make sense to compare OTV with VPLS (which was DJ Spry’s next question). Apart from the obvious platform dependence (OTV runs on Nexus 7000, VPLS runs on Catalyst 6500/Cisco 7600 and a few other routers), which might disappear once ASR1K gets the rumored OTV support, there’s a huge gap in functionality and complexity between the two layer-2 transport technologies.
VPLS (Virtual Private LAN Service) is a hodgepodge of kludges grown “organically” over the last decade. It all started with the MPLS/VPN revolution – service providers were replacing their existing L2 gear (ATM and Frame Relay switches) with routers, but still needed a life support mechanism for legacy cash cows. It didn’t take long for someone to invent pseudowires over MPLS, which supported Frame Relay and frame-mode ATM first, with cell-mode ATM and Ethernet eventually added to the mix. Not surprisingly, the technology is called AToM (Any Transport over MPLS).
A bit later, L3 vendors (Cisco and Juniper) got stuck in their MPLS/VPN blitzkrieg: their customers (traditional service providers) didn’t have the knowledge needed to roll out enterprise-grade L3 service (MPLS/VPN), so they wanted to keep it simple (like in the good old days) and provide L2 transport. Sure, why not ... it took just a few more kludges to provision a full-mesh of pseudowires, add dynamic MAC learning and later BGP-based PE-router autodiscovery and the brand-new VPLS technology was ready for business.
When the Data Center engineers wanted to implement L2 DC interconnects (a huge mistake if there ever was one), the VPLS technology was readily available, and in another “doing more with less” epiphany a square peg was quickly hammered into a round hole.
To say that VPLS was less than a perfect fit for L2 DCI needs would be the understatement of the year. VPLS never provided standard PE-router (DC edge device) redundancy (the ICCP protocol is in its early stages, although it does seem to be working on ASR9K), so you had to heap a number of additional kludges on top of it (there’s a whole book describing the kludges you have to use to get VPLS working in DCI scenarios) or merge the two edge devices into a single logical device (with VSS and A-VPLS).
Furthermore, VPLS (at least Cisco’s implementation of it) relies on MPLS transport; if your DCI link has to use IP infrastructure, you have to configure MPLS over GRE tunnels before you can configure VPLS.
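To give you a feel for the extra shim layer, here’s a minimal IOS-style sketch (all addresses, interface names and the VFI name are made up for illustration; check your platform’s documentation before copying anything):

```
! GRE tunnel providing MPLS transport across an IP-only DCI link
interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 192.0.2.2
 mpls ip                  ! run LDP/MPLS over the GRE tunnel
!
! Only after MPLS works over the tunnel can VPLS use it
l2 vfi DCI manual
 vpn id 100
 neighbor 192.0.2.2 encapsulation mpls
```

Every PE-router pair needs such a tunnel (and a pseudowire on top of it), so the configuration grows quadratically with the number of sites.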
Last but definitely not least, as Cisco never supported point-to-multipoint LSPs, multicast and broadcast packets sent over VPLS (as well as unknown unicast floods) are replicated at the ingress device (inbound DC edge device): with N remote sites, N copies of every flooded frame are sent across the DCI infrastructure.
OTV (Overlay Transport Virtualization) is a clean-slate L2-transport-over-IP design. It does not use tunnels (at least not conceptually; you could argue that the whole OTV cloud is a single multipoint tunnel) or intermediate shim technologies, but encapsulates MAC frames directly into UDP datagrams. It does not rely on dynamic MAC address learning, but uses IS-IS to propagate MAC reachability information. There is no unknown unicast flooding (bridging-abusing brokenware is supported with manual configuration), and L2 multicasts are turned into IP multicasts for optimal transport across the DCI backbone (assuming the transport backbone can provide IP multicast services).
OTV (like other modern L2 technologies) also solves multihoming issues – it uses an Authoritative Edge Device approach very similar to TRILL’s appointed forwarder. There are additional goodies like ARP snooping and active-active forwarding (with VPC) ... and the icing on the cake is its beautifully simple configuration syntax (until, of course, large customers start asking for knobs solving their particular broken designs and a full-speed feature creep kicks in).
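To illustrate how simple that syntax is, here’s a minimal NX-OS OTV sketch (VLAN numbers, interface names and multicast groups are made up; treat it as an outline, not a tested configuration):

```
feature otv
otv site-vlan 10                   ! VLAN used for intra-site adjacency

interface Overlay1
  otv join-interface Ethernet1/1   ! physical uplink into the IP core
  otv control-group 239.1.1.1      ! ASM group carrying the IS-IS control plane
  otv data-group 232.1.1.0/28      ! SSM range for extended L2 multicast traffic
  otv extend-vlan 100-150          ! VLANs stretched across the overlay
  no shutdown
```

Compare that with the per-neighbor tunnels and pseudowires VPLS needs: one overlay interface covers all sites.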
The only gripe I have with OTV at the moment is that its current implementation still feels like a proof of concept (I know OTV aficionados are jumping up and down at this point ;):
- The maximum number of devices in an OTV overlay is six: three sites with two edge devices each.
- It requires IP multicast in the transport IP core (if your IP transport infrastructure doesn’t provide IP multicast, you have to insert an extra layer of devices running IP MC over GRE tunnels); unicast mode is supposedly coming with NX-OS release 5.2.
- Nexus 7000 behaves like an IP host on the OTV side: the join interface must be a physical interface, and the only redundancy you can get is a port channel (loopback-interface support with routing-protocol-based redundancy was promised for a future release).
The Data Center Interconnects webinar (register here) describes numerous L2 DCI technologies, including VPLS, A-VPLS, OTV, TRILL, BGP MPLS-based MAC VPN (from Juniper) and EtherIP between load balancers (F5). You’ll also discover things you never wanted to know about L2 DCI caveats and challenges.
The Choose the Optimal VPN Service webinar describes numerous VPN services (including MPLS/VPN, pseudowires and VPLS) from the customer’s perspective.