VXLAN: awesome or braindead?

Just a few hours after VXLAN was launched, I received an e-mail from one of my readers asking (literally) if VXLAN was awesome or braindead. I decided to answer this question (you know the right answer is it depends) and a few others in a FastPacket blog post published by SearchNetworking.

I wrote the post before NVGRE was published, so I missed the “brilliant” idea of using the GRE key as a virtual segment ID.
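The two schemes differ mainly in where the 24-bit segment ID lives: VXLAN carries it as the VNI in its own 8-byte header, while NVGRE reuses the high-order bits of the GRE key field. A minimal sketch of the two header layouts in Python (field layouts as later standardized in RFC 7348 and RFC 7637; the helper names are mine, for illustration only):

```python
import struct

def vxlan_header(vni):
    """8-byte VXLAN header (RFC 7348): flags byte with the I bit set,
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08000000, vni << 8)

def nvgre_header(vsid, flow_id=0):
    """GRE header with the Key bit set (RFC 2890) and protocol type
    0x6558 (Transparent Ethernet Bridging); NVGRE (RFC 7637) puts the
    24-bit VSID in the top bits of the key, FlowID in the low 8 bits."""
    assert 0 <= vsid < 2**24 and 0 <= flow_id < 2**8
    return struct.pack("!HHI", 0x2000, 0x6558, (vsid << 8) | flow_id)

# Either way, a tenant segment is identified by 24 bits (~16M segments),
# versus the 12-bit VLAN ID (4096 segments) of 802.1Q.
vni_bits = int.from_bytes(vxlan_header(5000)[4:8], "big") >> 8   # 5000
vsid_bits = int.from_bytes(nvgre_header(5000)[4:8], "big") >> 8  # 5000
```

Both headers then ride inside an outer IP/UDP (VXLAN) or plain IP (NVGRE) packet, which is why neither needs the physical network to carry per-segment VLANs.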

Read more @ SearchNetworking

6 comments:

  1. Ivan,

    I couldn't comment on searchnetworking so I'll do it here. That was a great post on a new product that I know nothing about.

    Thanks again, bloggers like you make my job that much more fun.
  2. Great post here at SearchNetworking talking about VXLAN. Clearly, this is great for massive data centers. Being able to tunnel from VEM to VEM without needing to tag 1000s of VLANs from physical switch to ESX host is great...keeping the DC on a solid, proven L3 design...even better. Now my question is, is FabricPath really going to go anywhere? The whole point was to increase the size of L2 domains to allow for VM mobility, right? How about those Cisco F1 cards required for FabricPath? I think VXLANs could make for a better overall DC solution, assuming there is an easier way to exit that particular VXLAN/subnet.

    What do you think?
  3. As I wrote in the original VXLAN post (http://blog.ioshints.info/2011/08/finally-mac-over-ip-based-vcloud.html) - goodbye, large-scale bridging (including FP) and EVB.

    Of course, we have to wait for physical device termination (at least in enterprise data centers) and for the technology to mature, but at least the path forward is clear. I don't think you'll see too many VLANs in a state-of-the-art DC in a few years.
  4. Great, I read that article too, but just wanted clarification. When you say not too many VLANs in the data center in a few years, what exactly is driving that? That was happening with FP/EVB anyway, right? With technologies like VXLAN, as many VLANs as there are today could still exist, since there will need to be an SVI of some sort for each VLAN, and there would be no need to reduce the number of VLANs to accomplish server mobility due to a potential L2 full mesh between all virtual switches (and associated physical switches for exit points).
  5. EVB simplifies 802.1Q VLAN provisioning.

    FP enables large-scale bridging.

    VXLAN removes the need for physical VLANs because virtual segments are no longer created with VLANs but transported over IP.

    Too bad you didn't attend yesterday's webinar.
  6. Too bad, Ivan! Understand all of that...I think I was just having trouble conceptualizing VXLANs terminating on L3/physical devices, or maybe it's still a lack of understanding :). One question I always ask as topics like this (and OF) are discussed - where will the default gateway reside? With VXLANs, will it be the virtual switch or the physical switch that terminates VXLANs? Will VXLANs terminate on two switches for HA? What's your take?
    Thanks.