Can VMware NSX and Cisco ACI Interoperate over VXLAN?

I got a long list of VXLAN-related questions from one of my subscribers. It started with an easy one:

Does Cisco ACI use VXLAN inside the fabric or is something else used instead of VXLAN?

ACI uses VXLAN but not in a way that would be (AFAIK) interoperable with any non-Cisco product. While they do use some proprietary tagging bits, the real challenge is the control plane.

In APIC release 2.0 you can run EVPN (a standard control plane) from an ACI fabric to a Nexus 7000 or ASR 1000 router.

NSX does use VXLAN but has features beyond ACI such as load balancers and firewalls.

Firewalls - yes, including distributed ones. Load balancers - yes, the NSX Edge Services Gateway can also act as a load balancer. Distributed load balancers? Those work best in PowerPoint; it's an awfully hard problem to solve.

I understand VXLAN is a standards-based protocol and NSX and ACI are vendor terms, but how do they inter-relate?

VXLAN is just a data-plane protocol (like IP or Ethernet). The problem is the control plane - how do you figure out where everyone is (think OSPF or BGP). I think the intersection of NSX and ACI control plane protocols is still zero.
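To make the data-plane/control-plane split more concrete, here's a minimal Scapy sketch (assuming Scapy is installed; all MAC/IP addresses and the VNI are made up). The VXLAN encapsulation itself is trivial; the interesting part is the MAC-to-VTEP dictionary, which stands in for whatever the control plane (flood-and-learn, a controller, or EVPN) has to populate:

```python
from scapy.all import Ether, IP, UDP, VXLAN

# Control-plane stand-in: something has to tell the ingress VTEP which
# egress VTEP owns the destination MAC -- VXLAN itself doesn't do this.
mac_to_vtep = {"00:50:56:dd:ee:ff": "192.0.2.2"}

# The VM's original frame (data plane).
inner = Ether(src="00:50:56:aa:bb:cc", dst="00:50:56:dd:ee:ff") / \
        IP(src="10.1.1.10", dst="10.1.1.20")

# Data-plane encapsulation: outer MAC/IP/UDP/VXLAN wrapped around the frame.
outer = (Ether() /
         IP(src="192.0.2.1", dst=mac_to_vtep[inner.dst]) /
         UDP(sport=49152, dport=4789) /   # 4789 = IANA-assigned VXLAN port
         VXLAN(vni=10042) /
         inner)

print(outer.summary())
```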

I recently heard in a VMware session that you could use NSX with ACI, and I scratched my head and wondered why you would do this - what would it provide over the added support burden?

Sure you can. You build an ACI fabric to provide IP transport and run NSX on top of it ;) A bit expensive if you ask me ;))

However, I know of production deployments running VMware NSX on top of Nexus 9000 switches using the EVPN control plane. Mitja Robas talked about his experiences with one of those deployments in the last session of the Autumn 2016 Building Next-Generation Data Centers online course (and of course you'll get access to the recording of his presentation if you register for the Spring 2017 session).
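For reference, this is roughly the information a standard EVPN control plane distributes for every host: a MAC/IP advertisement (EVPN route type 2, RFC 7432) that ties a MAC (and optionally its IP) to a VNI and to the VTEP behind which it lives. The Python data structure below is just an illustration of the route contents, not a BGP implementation; all values are made up.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvpnMacIpRoute:
    """Rough shape of an EVPN MAC/IP advertisement (route type 2)."""
    route_distinguisher: str    # identifies the advertising VTEP/VRF
    mac_address: str            # host MAC learned behind the VTEP
    ip_address: Optional[str]   # optional host IP (enables ARP suppression)
    l2_vni: int                 # VNI carried in the label field for VXLAN
    next_hop_vtep: str          # BGP next hop = VTEP to send the traffic to

# What a leaf switch would advertise after learning a local host:
route = EvpnMacIpRoute(
    route_distinguisher="192.0.2.1:100",
    mac_address="00:50:56:dd:ee:ff",
    ip_address="10.1.1.20",
    l2_vni=10042,
    next_hop_vtep="192.0.2.1",
)
print(route)
```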

5 comments:

  1. There are really two types of VXLAN used by ACI. One is the internal VXLAN (iVXLAN or eVXLAN, I'm not sure what it's called today): the proprietary VXLAN that uses extra bits in the header for things like the source group (which saves a ton of TCAM space).

    Just like HiGig2 or other internal switching encapsulations, it doesn't leave the ACI fabric. Whether the incoming traffic is VXLAN, VLAN-tagged, or untagged, the packets get decapsulated and re-encapsulated into that iVXLAN header as they bounce around inside ACI, and the header is removed by the time the traffic leaves the fabric. Each bridge domain is its own iVXLAN segment. Cisco refers to this as normalization, so that different encapsulations can be used on the same network (see the sketch after the comments).

    MP-BGP EVPN is used for the new multi-pod topologies, and as far as I know it's currently ACI-specific.

    Then there's the IETF-standard VXLAN encapsulation, which ACI can also do. There's some integration with vShield and OVS (as well as AVS) using either 802.1Q VLAN-tagged frames or IETF-standard VXLAN-encapsulated packets, but I don't think there's currently any specific integration going on with NSX. In terms of ACI and NSX, I believe ACI would be an IP underlay.
  2. VXLAN (NSX) over eVXLAN (ACI) seems to be an unnecessary complication, but on the other hand the two can play different roles: ACI can provide a secured/automated fabric, and NSX can provide end-to-end network services available in multiple locations, including a public cloud. There are examples where EVPN or TRILL is the transport and NSX runs on top of it, similar to CsC (Carrier Supporting Carrier) services with MPLS-TE FRR or Segment Routing in the core. Of course VXLAN-over-VXLAN imposes a bigger overhead, but that is just a number of bytes: from the ASIC perspective an incoming VXLAN packet is just an IP/UDP packet, so there is no performance impact apart from processing and serializing the additional 50 bytes per packet (see the overhead arithmetic after the comments).

    VXLAN in NX-OS standalone mode is a different story: there, VXLAN on Cisco switches can interoperate with NSX directly, forming a single overlay network.

    I also thought about a use case for running ACI over NSX: encryption, which is not part of ACI but is available as an encrypted L2VPN service in NSX. ;)
  3. Why would you do this? I cannot think of a fiscal or technical driver for it. Plain-vanilla vCenter, sure, integrate it with ACI; I believe you can automatically sync with the DVS so that you don't have to manually configure VLAN/port/EPG information every time you add a server. But if you've already spent the substantial money to implement NSX, why are you also doing ACI? The whole point of going with NSX is to virtualize the network layer and abstract it from the hardware. You can use whitebox switches... they are literally just moving encapsulated packets; there's no need for the advanced switching feature sets of the Cadillac vendors. And from what I've seen, the VXLAN configuration is pretty light: some information provided during the setup and build phase, not much interaction once it's rolled out.

    From the ACI side, you can do the network layer with plain VMware covering the server side of things. I think you can even do bare-metal deployments and layer-2 integration with Cisco's AVS... not 100% certain yet. But I don't think you need the advanced features of NSX on top of this. And like NSX, the VXLAN/EVPN stuff happens behind the scenes: you do some early configuration and provide some information, then it all runs in the background like magic.

    I don't know whether you can (or should) try to merge the VXLAN fabric of either vendor with anything else, even something that uses RFC-standard VXLAN. I think that would be opening a can of worms. I suspect neither platform would gracefully accept VXLAN moves/adds/changes coming from an external platform, but I could be wrong.

    Again, I really can't see what technical driver would ever lead to spending the money for both ACI and NSX. Even if you had two separate existing environments and suddenly had a business driver force you down the path of merging them, I think you would want to select which vendor is going to handle what part then strip out any overlapping features (if possible) so that your recurring maintenance and license renewals wouldn't be obscene.
    Replies
    1. Google "ACI NSX Bacon"

      In terms of cost, ACI is just 5-10% more than the equivalent Nexus 9K EVPN fabric, and only 20-30% more than the equivalent VPC/STP design. That's a wash, easily justified by ACI's single point-of-management alone! Plenty of ACI customers buy it expecting nothing more than a really good L2/L3 fabric.

      When you run NSX on ACI, ACI is purely an underlay. But it's a damn good one -- (a) discrete fabric-wide security, QoS, and health stats for NSX VXLAN, ESXi vMotion, ESXi storage, and VMware management traffic; (b) BGP/OSPF peering with the NSX DLR for fully distributed/anycast L3 ACI-to-NSX routing (no VPC/dynamic routing goofiness); (c) MAC/IP routing with sub-millisecond (!) fault recovery across hundreds of leaves; and so on...

      In contrast to ACI's chump change, NSX doubles your per-VM cost (based on several real-world scenarios I've done with and without ELAs). As a result, many NSX customers are limiting their deployments to narrow high-security areas such as PCI, HIPAA, or IP.
  4. The problem is still that NSX does not support EVPN; EVPN support would make this easy.
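To make the encapsulation normalization described in comment #1 a bit more concrete, here's a minimal Python sketch: whatever arrives at the fabric edge (untagged, 802.1Q-tagged, or VXLAN-encapsulated) is mapped to an internal segment on ingress and re-encapsulated on egress. The table contents, segment names, and functions are purely illustrative; they are not ACI data structures or APIs.

```python
# Purely illustrative mapping tables -- not ACI's internal representation.
INGRESS_MAP = {
    # (encap type, identifier) -> internal segment ("bridge domain")
    ("vlan", 100):     "bd-web",
    ("vxlan", 10042):  "bd-web",    # a different encap can land in the same segment
    ("untagged", 0):   "bd-mgmt",
}

EGRESS_MAP = {
    # internal segment -> (encap type, identifier) expected on the egress port
    "bd-web":  ("vlan", 200),
    "bd-mgmt": ("untagged", 0),
}

def normalize(encap, ident):
    """Strip the ingress encapsulation and return the internal segment."""
    return INGRESS_MAP[(encap, ident)]

def re_encapsulate(segment):
    """Pick the encapsulation used when the frame leaves the fabric."""
    return EGRESS_MAP[segment]

segment = normalize("vxlan", 10042)
print(segment, "->", re_encapsulate(segment))    # bd-web -> ('vlan', 200)
```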
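And to put a number on the VXLAN-over-VXLAN overhead mentioned in comment #2, a quick back-of-the-envelope calculation (assuming an untagged outer Ethernet header and an IPv4 outer header):

```python
# Per-layer VXLAN encapsulation overhead in bytes.
OUTER_ETHERNET = 14   # outer MAC header, without an optional 802.1Q tag
OUTER_IPV4     = 20
OUTER_UDP      = 8
VXLAN_HEADER   = 8

vxlan_overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(vxlan_overhead)       # 50 bytes for a single VXLAN layer
print(2 * vxlan_overhead)   # roughly doubles when NSX VXLAN rides over another VXLAN
```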