802.1BR – same old, same old

A while ago, a tweet praising the wonders of 802.1BR piqued my curiosity. I couldn’t resist downloading the latest draft and spending a few hours trying to decipher IEEE language (as IEEE drafts go, 802.1BR is highly readable) ... and it was déjà vu all over again.

Short summary: 802.1BR is a repackaged and enhanced 802.1Qbh (or the standardized version of VM-FEX). There’s nothing fundamentally new that would have excited me.

Compared to Edge Virtual Bridging (EVB, 802.1Qbg), 802.1BR does have a few interesting twists: you can have hierarchical port extenders, which means that 802.1BR gives you a standardized way to connect (for example) a hypervisor host to a Nexus 5000 through a port extender (Nexus 2000) and see each VM as a separate interface on the Nexus 5000/5500. Whether that solves your management or scalability problems is a different question.
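To get a feel for how the pieces fit together, here’s a minimal configuration sketch. This is hypothetical NX-OS syntax on a Nexus 5500 (the FEX number, fabric interface, port-profile name, and VLAN are all made up for illustration), showing a port extender being attached to its parent switch and a vEthernet port profile for VMs to attach to:

  ! Associate a Nexus 2000 port extender with the parent Nexus 5500
  fex 100
    description rack-7-port-extender

  interface Ethernet1/1
    switchport mode fex-fabric
    fex associate 100

  ! Define a vEthernet port profile; every VM attached to it shows up
  ! on the parent switch as its own dynamically created vEthernet interface
  port-profile type vethernet VM-Data
    switchport mode access
    switchport access vlan 100
    state enabled

The point being that the per-VM vEthernet interfaces are created on demand, and the parent switch treats each one like any other switch port.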

Cisco touts numerous advantages of VM-FEX, including:

  • Feature richness. Physical switches have richer feature sets than hypervisor switches. Those features are definitely nice to have, but do you actually need them? And are all of them available on dynamic port extender interfaces?
  • Better security. To be precise, you can tightly control VM traffic with ACLs on physical switches and a few other features like IP source guard. For those who believe in ACL-based security, VM-FEX (or 802.1BR) is a perfect solution ... but will ACLs on a ToR switch really solve your security problems? (See the sketch after this list.)
  • Visibility into inter-VM traffic. This might have been a valid argument in pre-vSphere 5 days; vSphere 5 has built-in SPAN.
  • Increased performance. VM-FEX with hypervisor bypass significantly increases the performance of I/O-intensive VMs.
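If you do believe in ACL-based security, here’s roughly what the second bullet amounts to in practice. This is a hypothetical NX-OS sketch (the ACL name, VLAN, and port-profile name are made up) that attaches a port ACL to a vEthernet port profile, so every per-VM interface created from it inherits the filter:

  ! Hypothetical port ACL, inherited by every VM interface
  ! instantiated from the port profile below
  ip access-list VM-PROTECT
    permit tcp any any eq 443
    deny ip any any

  port-profile type vethernet WebServers
    switchport mode access
    switchport access vlan 200
    ip port access-group VM-PROTECT in
    state enabled

Whether a static permit/deny list like this actually solves your security problems is, as noted above, a different question.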

Although these features make VM-FEX highly attractive, it’s still bridging, and the best you can do on a Nexus 5500 (not yet on the UCS Fabric Interconnect) is to bridge the VM-generated traffic into the FabricPath core. As I said, same old, same old; the hypervisor vendors have already moved on.

More information

If you’re new to virtualized networking and would like to understand what this is all about, start with the Introduction to virtualized networking webinar. You’ll find more advanced topics in the VMware Networking Deep Dive and Cloud Computing Networking webinars (the latter now includes a 1.5-hour section on IaaS scalability). All webinars are available as individual recordings or as part of the yearly subscription.

Finally, if you’d like me to review your virtualized data center design or discuss various technology options, check out the ExpertExpress.

7 comments:

  1. The fact that it’s an IEEE 802.xx standard may indicate why it’s lagging behind the industry outlook.

    --

    I disagree that VM-FEX is 'highly attractive'. I struggle to find any operational benefits for most, if not all, virtualisation customers.

    AFAIK, the hypervisor bypass features usually gain speed but remove or limit functionality.

    At a stretch I could envision using VM-FEX to break out a virtual firewall/NAS head-end/backup server, etc., to gain performance improvements.

    But moving away from the centralised operational model (i.e. vCenter) is a _significant_ downside.
    Replies
    1. Umm... management is split between two different groups, so VM-FEX solves the political problem as well.

      Generally the network group manages the networking and the systems/server group manages the VM piece, so VM-FEX/802.1BR actually simplifies things drastically.

      The network person creates the port profiles on the upstream Nexus gear, and they simply show up in the VM environment as vSwitch options for the sysadmin to attach VMs to.

      The sysadmin gets a user-friendly VLAN name without having to configure any of the networking components (teaming, hash algorithm, VLAN tags, spanning tree, etc.).

      Beyond that, there’s the benefit of the VIC providing direct access to the physical NIC's memory space through VMDirectPath, which removes the 2 Gbps limitation of the hypervisor vSwitch. In the not-so-distant future, when we have 40 Gbps NICs in the back of a node, you can actually use all of that bandwidth for a single VM.
    2. 2 Gbps per vSwitch is a myth, maybe persisting from the old days of vSphere 3. I was able to push 10 Gbps through a 10GE uplink with vSphere 4 (and 17 Gbps between VMs in the same hypervisor).
  2. I was one of the developers of IEEE 802.1BR, so if you want to know the real story, let me know :-). One thing I can unveil here: 802.1Qbh and 802.1BR are exactly the same, and there is a clear reason why we renamed it. It would be a longer story, but if there's interest I can share the entire history with you ...
    Everything Cisco calls FEX is based on BR (actually running on the pre-standard VNTAG): chassis FEX (Nexus 2K), Adapter FEX, UCS FI, or VM-FEX (which is nothing else than an Adapter FEX with some more OS software on top, bringing the FEX, i.e. the bridge port extender, into an OS). Best regards, Rene
    Replies
    1. Rene, I'm very interested in hearing the story.
    2. Please provide the full story, Rene... I am interested in it.
  3. Yeah, I'm kind of interested to know the full story as well, Rene, if you don't mind telling it.