Unicast-Only VXLAN Finally Shipping

The long-promised unicast-only VXLAN has finally shipped with the Nexus 1000V release 4.2(1)SV2(2.1) (there must be some logic behind those numbers, but they all look like madness to me). The new Nexus 1000V release brings two significant VXLAN enhancements: unicast-only mode and MAC distribution mode.

Unicast-Only VXLAN

The initial VXLAN design and implementation took the traditional doing-more-with-less approach: VXLANs behave exactly like VLANs (including most of the scalability challenges VLANs have) and rely on a third-party tool (IP multicast) to solve the hard problem (MAC address learning) that both Nicira and Microsoft solved with control-plane solutions.
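
To make the contrast concrete, here's a minimal Python sketch of what multicast-mode VXLAN flooding with data-plane MAC learning boils down to. Everything here (class names, the per-VNI group table, method signatures) is a made-up illustration of the mechanism, not Cisco's actual implementation:

```python
# Hypothetical sketch of a multicast-mode VXLAN edge device (VTEP).
# The VNI number and multicast group are arbitrary examples.

MCAST_GROUP = {5000: "239.1.1.1"}    # per-VNI multicast group (assumed)

class MulticastVtep:
    def __init__(self, vtep_ip):
        self.vtep_ip = vtep_ip
        self.mac_table = {}          # (vni, mac) -> remote VTEP IP

    def send(self, vni, dst_mac, frame):
        remote = self.mac_table.get((vni, dst_mac))
        if remote is None:
            # Broadcast/unknown unicast: encapsulate toward the per-VNI
            # multicast group and let the transport network replicate it.
            self.encapsulate(vni, MCAST_GROUP[vni], frame)
        else:
            self.encapsulate(vni, remote, frame)

    def receive(self, vni, src_mac, outer_src_ip, frame):
        # Data-plane learning, exactly like a learning bridge: remember
        # which VTEP the source MAC address sits behind.
        self.mac_table[(vni, src_mac)] = outer_src_ip

    def encapsulate(self, vni, outer_dst_ip, frame):
        print(f"VXLAN encap: VNI {vni} -> outer IP {outer_dst_ip}")
```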

Unicast-only VXLAN comes closer to what other overlay virtual networking vendors are doing: the VSM knows which VEMs have VMs attached to a particular VXLAN segment and distributes that information to all VEMs – each VEM receives a per-VXLAN list of destination IP addresses to use for flooding purposes.
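
Assuming a hypothetical controller API, the mechanism might look like the following sketch: the control plane pushes a per-VXLAN flood list, and the edge switch head-end replicates flooded frames to every entry on that list.

```python
# Hypothetical sketch of unicast-only flooding (head-end replication).
# The controller push is modeled as a simple method call; the real
# VSM-to-VEM protocol is not public.

class UnicastVtep:
    def __init__(self, vtep_ip):
        self.vtep_ip = vtep_ip
        self.flood_list = {}         # vni -> list of remote VTEP IPs

    def update_flood_list(self, vni, vtep_ips):
        # Pushed by the controller whenever VM placement changes.
        self.flood_list[vni] = [ip for ip in vtep_ips if ip != self.vtep_ip]

    def flood(self, vni, frame):
        # One unicast copy per remote VTEP; the transport network only
        # has to deliver plain unicast IP packets.
        for remote in self.flood_list.get(vni, []):
            print(f"VXLAN encap: VNI {vni} -> outer IP {remote}")

vem = UnicastVtep("10.1.1.1")
vem.update_flood_list(5000, ["10.1.1.1", "10.1.1.2", "10.1.1.3"])
vem.flood(5000, b"ARP request")      # replicated to .2 and .3 only
```

The trade-off is obvious: the replication burden moves from the transport network into the ingress edge switch, which is usually a fair price for getting rid of IP multicast.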

Watch my VMware Networking Technical Deep Dive webinar for an in-depth description of VSM and VEM.

MAC Distribution Mode

MAC distribution mode goes a step further: it eliminates data-plane MAC address learning and replaces it with a control-plane solution (similar to Nicira/VMware NVP): the VSM collects the list of MAC addresses and distributes the MAC-to-VTEP mappings to all VEMs participating in a VXLAN segment.
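
Continuing the sketch above, MAC distribution mode could be modeled like this: the controller learns each MAC-to-VTEP mapping when a VM attaches and pushes the full table to every participating edge switch, so both data-plane learning and flooding of unknown unicasts disappear. Names are again illustrative assumptions:

```python
# Hypothetical sketch of control-plane MAC distribution.

class Controller:
    def __init__(self, vems):
        self.vems = vems             # participating edge switches

    def register_mac(self, vni, mac, vtep_ip):
        # Called when a VM attaches to a segment: push the new
        # MAC-to-VTEP mapping to every participating edge switch.
        for vem in self.vems:
            vem.mac_table[(vni, mac)] = vtep_ip

class ControlPlaneVtep:
    def __init__(self, vtep_ip):
        self.vtep_ip = vtep_ip
        self.mac_table = {}          # populated solely by the controller

    def send(self, vni, dst_mac, frame):
        remote = self.mac_table.get((vni, dst_mac))
        if remote is None:
            return                   # unknown MAC: drop instead of flooding
        print(f"VXLAN encap: VNI {vni} -> outer IP {remote}")
```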

Other Goodies

Cisco also increased the maximum number of VEMs a single VSM can control to 128, and the maximum number of virtual ports per VSM (DVS) to 4096.

Does It Matter?

Sure it does. The requirement to use IP multicast for VXLAN flooding was a major showstopper in data centers that have no other need for IP multicast (almost everyone apart from financial institutions dealing with multicast-based market data feeds). Unicast-only VXLAN will definitely simplify VXLAN deployments and increase VXLAN adoption.

MAC distribution mode is a nice-to-have feature that you’d need primarily in large-scale cloud deployments. Most reasonably sized enterprise data centers can probably live happily without it (of course I might be missing something fundamental – do write a comment).

The Caveats

The original VXLAN proposal was a data-plane-only solution – boxes from different vendors (not that there would be that many of them) could freely interoperate as long as you configured the same IP multicast group everywhere.

Unicast-only VXLAN needs a signaling protocol between the VSM (or another control/orchestration entity) and the individual VTEPs. The protocol currently used between the VSM and the VEMs is probably proprietary; Cisco plans to use VXLAN over EVPN for inter-VSM connectivity, but who knows when that Nexus 1000V code will ship. In the meantime, you cannot connect a VXLAN segment running in unicast-only mode to a third-party gateway (example: Arista 7150).

Due to the lack of an inter-VSM protocol, you cannot scale a single VXLAN domain beyond 128 vSphere hosts, which probably limits the size of your vCloud Director deployment. In multicast VXLAN environments, vShield Manager automatically extends VXLAN segments across multiple distributed switches (or so my VMware friends tell me); it cannot do the same trick in unicast-only VXLAN environments.

Comments:

  1. Now we just need VXLAN to VTEP syncing :)

    I have some beta Brocade 6740s running as VXLAN VTEPs, but the lack of unicast VXLAN makes it not worth it.

    Can't wait for the day when I don't need to build "fabrics" (Nexus, VDX, etc.)... L3!

  2. Multicast is an obvious barrier to doing VXLAN across a DCI such as Carrier MPLS. I seem to recall you previously concluded that VXLAN between data centers is not ideal. Does this development have any bearing in that regard?
    Replies
    1. They added none of the features I mentioned in http://blog.ioshints.info/2012/11/vxlan-is-not-data-center-interconnect.html, so from the DCI perspective nothing has changed (IMHO).
  3. As for me, I'm trying to understand how to implement the non-multicast mode of VMware Networking & Security Manager's VXLAN, which the VMware VXLAN Deployment Guide refers to with this statement: "Configure one VLAN between the two clusters for carrying the VXLAN transport traffic. By provisioning one VLAN for transport, the requirement to enable multicast routing on the physical switches is eliminated." The funny fact is that on page 14 of the same guide the reader again faces a VXLAN segment ID <-> multicast group mapping configuration section. My experience deploying VMware's VXLAN implementation is described here:
    https://communities.vmware.com/thread/470752
    Replies
    1. VMware DOES NOT have unicast VXLAN in vCNS (you might get it to work if you combine vShield Manager with Cisco Nexus 1000V).

      What they're telling you is to connect the VXLAN interfaces of all servers to the same layer-2 domain (yeah, that's totally reliable ;)) – you don't need IP multicast support on the routers if the multicast traffic stays within a single subnet.