Hyper-V 3.0 Extensible Virtual Switch

It took years before the rumored Cisco vSwitch materialized (in the form of the Nexus 1000V), several more years before the first competitor appeared (IBM's Distributed Virtual Switch), and who knows how long before the third entrant (the recently announced HP vSwitch) jumps out of PowerPoint slides and whitepapers into the real world.

Compare that to the Hyper-V environment, where we have at least two third-party virtual switches (Cisco's Nexus 1000V and NEC's PF1000) mere months after Hyper-V 3.0's general availability.

The difference: Microsoft did the right thing, created an extensible vSwitch architecture, and thoroughly documented all the APIs (there's enough documentation that you can go and implement your own switch extension if you're so inclined).
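
For the curious, here's roughly what the entry point of such an extension looks like: a Hyper-V switch extension is packaged as an NDIS 6.30 lightweight filter driver and registered with NdisFRegisterFilterDriver. The fragment below is a minimal sketch, not a working driver; the FilterXxx callbacks, the GUID, and the service name are placeholders, and the INF file that binds the driver to the extensible switch is omitted.

```c
/* Minimal sketch: registering a Hyper-V switch extension, which is an
 * NDIS 6.30 lightweight filter driver. Compile as a kernel driver with
 * NDIS630 defined; the FilterXxx callbacks are placeholders that would
 * be implemented elsewhere in the driver. */
#include <ndis.h>

FILTER_ATTACH                FilterAttach;      // declared here, defined elsewhere
FILTER_DETACH                FilterDetach;
FILTER_PAUSE                 FilterPause;
FILTER_RESTART               FilterRestart;
FILTER_SEND_NET_BUFFER_LISTS FilterSendNetBufferLists;

NDIS_HANDLE FilterDriverHandle;                 // handle returned by NDIS

NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    NDIS_FILTER_DRIVER_CHARACTERISTICS fc;

    // Hypothetical identity; a real extension registers its own names/GUID
    // and is bound to the extensible switch class through its INF file.
    NDIS_STRING friendly = RTL_CONSTANT_STRING(L"Demo Filtering Extension");
    NDIS_STRING unique   = RTL_CONSTANT_STRING(L"{0f6d4b67-0000-0000-0000-000000000001}");
    NDIS_STRING service  = RTL_CONSTANT_STRING(L"DemoFltExt");

    UNREFERENCED_PARAMETER(RegistryPath);

    NdisZeroMemory(&fc, sizeof(fc));
    fc.Header.Type      = NDIS_OBJECT_TYPE_FILTER_DRIVER_CHARACTERISTICS;
    fc.Header.Revision  = NDIS_FILTER_CHARACTERISTICS_REVISION_2;
    fc.Header.Size      = sizeof(fc);
    fc.MajorNdisVersion = 6;    // NDIS 6.30 = Windows Server 2012 / Hyper-V 3.0
    fc.MinorNdisVersion = 30;
    fc.FriendlyName     = friendly;
    fc.UniqueName       = unique;
    fc.ServiceName      = service;

    // Mandatory lifecycle callbacks plus the ingress data-path handler.
    fc.AttachHandler             = FilterAttach;
    fc.DetachHandler             = FilterDetach;
    fc.PauseHandler              = FilterPause;
    fc.RestartHandler            = FilterRestart;
    fc.SendNetBufferListsHandler = FilterSendNetBufferLists;

    return NdisFRegisterFilterDriver(DriverObject, NULL, &fc,
                                     &FilterDriverHandle);
}
```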

A short video taken from the Virtual Firewalls webinar describes the extensible architecture of the Hyper-V virtual switch.

Short Summary

  • A switch extension can be a capturing, filtering, or forwarding extension;
  • Capturing extensions can capture packets and generate their own packets (examples: reporting to an sFlow collector or implementing ERSPAN functionality);
  • Filtering extensions can inspect, drop (traffic policing or filtering), or delay (traffic shaping) packets, as well as generate their own packets (see the C sketch after this list);
  • Forwarding extensions can do all of the above, plus replace the default forwarding rules (specify their own set of output ports). Each packet can be sent to one or more output ports to implement flooding, multicast, or SPAN-like behavior;
  • Each extension is called twice per packet: first on the ingress path (from the source VM or physical port to the switch), then, once the set of destination physical or virtual ports is known, on the egress path (from the switch to the output ports).
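
To make the ingress/egress model a bit more tangible, here's a minimal sketch of a filtering extension's ingress handler. In the extensible switch model the ingress path rides on the NDIS send path, so an extension sees ingress packets in its FilterSendNetBufferLists callback. The NdisF* functions and the NET_BUFFER_LIST_SWITCH_FORWARDING_DETAIL macro are part of the documented NDIS 6.30 API; the FILTER_CONTEXT structure and the BLOCKED_PORT constant are invented for the example (a real extension would learn port IDs through the switch OIDs).

```c
/* Hedged sketch of a filtering extension's ingress handler (compile as an
 * NDIS 6.30 filter driver with NDIS630 defined). FILTER_CONTEXT and
 * BLOCKED_PORT are made up for this example. */
#include <ndis.h>

#define BLOCKED_PORT ((NDIS_SWITCH_PORT_ID)3)   // hypothetical port to police

typedef struct _FILTER_CONTEXT {
    NDIS_HANDLE NdisFilterHandle;                // saved in FilterAttach
} FILTER_CONTEXT;

VOID
FilterSendNetBufferLists(NDIS_HANDLE FilterModuleContext,
                         PNET_BUFFER_LIST NetBufferLists,
                         NDIS_PORT_NUMBER PortNumber,
                         ULONG SendFlags)
{
    FILTER_CONTEXT *ctx = (FILTER_CONTEXT *)FilterModuleContext;
    PNET_BUFFER_LIST nbl = NetBufferLists, next;
    PNET_BUFFER_LIST passHead = NULL, *passTail = &passHead;
    PNET_BUFFER_LIST dropHead = NULL, *dropTail = &dropHead;

    while (nbl != NULL) {
        next = NET_BUFFER_LIST_NEXT_NBL(nbl);
        NET_BUFFER_LIST_NEXT_NBL(nbl) = NULL;

        // The vSwitch stamps every NET_BUFFER_LIST with forwarding detail
        // (source port, source NIC) in its out-of-band data. A filtering
        // extension only reads it; a forwarding extension would also
        // rewrite the destination-port information in the same OOB area.
        PNDIS_SWITCH_FORWARDING_DETAIL_NET_BUFFER_LIST_INFO detail =
            NET_BUFFER_LIST_SWITCH_FORWARDING_DETAIL(nbl);

        if (detail->SourcePortId == BLOCKED_PORT) {
            NET_BUFFER_LIST_STATUS(nbl) = NDIS_STATUS_FAILURE;
            *dropTail = nbl;             // drop = policing/filtering behavior
            dropTail  = &NET_BUFFER_LIST_NEXT_NBL(nbl);
        } else {
            *passTail = nbl;             // pass down the ingress path unchanged
            passTail  = &NET_BUFFER_LIST_NEXT_NBL(nbl);
        }
        nbl = next;
    }

    if (passHead != NULL) {
        NdisFSendNetBufferLists(ctx->NdisFilterHandle, passHead,
                                PortNumber, SendFlags);
    }
    if (dropHead != NULL) {
        // Dropped packets must still be completed back to their originator.
        ULONG flags = NDIS_TEST_SEND_AT_DISPATCH_LEVEL(SendFlags) ?
                      NDIS_SEND_COMPLETE_FLAGS_DISPATCH_LEVEL : 0;
        NdisFSendNetBufferListsComplete(ctx->NdisFilterHandle, dropHead, flags);
    }
}
```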

It seems Microsoft really learned from the circuitous history of virtual networking, and it looks like they did the right thing … assuming the highly extensible mechanism they implemented doesn't bog down switching performance. Time to run some performance tests. Any volunteers?

9 comments:

  1. Ivan - it's my understanding that Open vSwitch can also run in Hyper-V. Is the proliferation of vSwitches a good thing for users? It can be a useful control point (NEC and HP are using it for SDN purposes), but it can also be a lock-in.
    Thanks,
    Stu Miniman
    Wikibon.org
    Twitter: @stu
    Replies
    1. I don't think there's a shipping OVS running in Hyper-V (at least I haven't found anything on their web site, which is the minimum I'd require before considering something "shipping").

      You might want to have the same configuration interface/API and the same overlay networking model across multiple hypervisors, in which case a vSwitch running on multiple platforms (ex: Nexus 1000V) comes in handy.
    2. Ivan, I spoke with Alessandro Pilotti (CloudBase Solutions) two days ago at OpenStack CEE Day in Budapest. He said they will announce OVS Hyper-V integration. Unfortunately, I don't have any details, as I attended a different track and didn't hear the actual talk, but I guess all the info should be out in the next few days?
    3. Thanks for the feedback. Based on their web site, they already have a Hyper-V Quantum plugin that works together with OVS in VLAN mode. If they have more than that, do tell him that I'm just an email away ;)

      Best,
      Ivan
  2. I used to think of it as lock-in, but as you said, Stu, it's a control point. Why should the virtual switch be controlled by 2 or 3 vendors/products when the physical network has many more than that?

    I think we'll see more vswitches emerge in some form or another (open source extensions or commercial products).

    So, is the proliferation of vSwitches a good thing for users? Not sure yet, but choice is good, so that in and of itself is usually a positive thing...for the user :)

    -Jason
    Replies
    1. The vSwitch is a powerful control point and the barrier to entry is significant. Rather than proliferating virtual switches, I think users are given more choice if each platform's virtual switch uses an extension model like Microsoft's. Extensibility lowers the barrier to entry, making it easy to develop new security, monitoring and forwarding capabilities and giving users the ability to mix and match modules in a way that makes sense in their environment.

      For example, InMon developed a free traffic monitoring extension implementing the sFlow standard on the Hyper-V extensible vSwitch and our module can be combined with the default switching module, or with Cisco or NEC forwarding modules. The following article describes our experiences with the Hyper-V extensible vSwitch:

      Microsoft Hyper-V

      One of the reasons Apache has been so successful is the rich ecosystem of modules that has developed around it. It would be great to see vSwitches develop into similarly open platforms.
  3. Microsoft needs third-party extensions while VMware often doesn't. Even after the release of the Nexus 1000V, many VMware implementations bypassed it. The native capabilities are "good enough".
  4. First of all, thank you for a great post.
    Anyway, I have a question about Hyper-V and the ESXi standard switch. I'm currently doing a performance evaluation of different vSwitches as part of my research work. My intention is to evaluate the performance of bare vSwitches by simply bridging network traffic between physical NICs (this way I can use external testing equipment). This is possible with Open vSwitch, but bridging between pNICs is disabled by default in Hyper-V and ESXi due to the lack of STP support (so traffic loops cannot occur). However, if I install the PF1000, pNIC bridging can be enabled in Hyper-V, unlike in ESXi (even with the Cisco Nexus 1000V).
    My question is, if you happen to know the answer: is there any way to disable this loop prevention in the standard ESXi vSwitch or in standard Hyper-V (maybe some debug or service mode exists where this would be possible)?
    Replies
    1. Let me rephrase: you decided to measure a forwarding path that is never used in real life and is thus probably not optimized. Measurements of forwarding performance between physical NICs are irrelevant to how these products are actually used.

      The validity of such results is questionable, and so is their usefulness.