Soft (hypervisor) switching links

Martin Casado and his team have published a great series of blog articles describing hypervisor switching (for the VMware-focused details, check out my VMware Networking Deep Dive). It starts with an overview of Open vSwitch (the open-source alternative to VMware’s vSwitch, commonly used in Xen/KVM environments), describes the basics of hypervisor-based switching, and addresses some of the performance myths. There’s also an interesting response from Intel setting the SR-IOV facts straight.
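
To make the basics of hypervisor-based switching a bit more tangible, here’s a minimal sketch of how an Open vSwitch bridge is typically wired up: a software bridge on the host, the physical NIC attached as the uplink, and each VM’s virtual interface attached as an access port. The bridge and interface names (br0, eth0, vnet0) and the VLAN tag are purely illustrative; treat this as a sketch, not a recipe.

    # Minimal Open vSwitch wiring sketch (names and VLAN tag are illustrative).
    # Assumes Open vSwitch is installed and the script runs with root privileges
    # on the hypervisor host.
    import subprocess

    def ovs(*args):
        # Run an ovs-vsctl command; raise if it fails.
        subprocess.run(["ovs-vsctl", *args], check=True)

    ovs("add-br", "br0")                       # software bridge living in the hypervisor
    ovs("add-port", "br0", "eth0")             # physical NIC becomes the uplink
    ovs("add-port", "br0", "vnet0", "tag=10")  # VM interface as an access port on VLAN 10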

After reading all those articles, you should start wondering:

  • Why the heck would I need Cisco’s VN-Link (remember: it’s not the same as VN-Tag)?
  • What is EVB bringing to the table? (hint: you might find the answer here)

As a side effect, you might also agree with me that VEPA is truly totally broken.

4 comments:

  1. Pass-through switching (VM-FEX) is not just about a latency benefit. The primary benefit is simplifying the linkage between the VM and the physical network. With pass-through (VM-FEX), lower latency, lower server CPU utilization and so on are just the icing on the cake.
  2. 100% agree. VM-aware networking enhancements are not just about resource consumption or speed.

    Designing a virtualization-aware network in practice requires a lot of work: studying failover scenarios and producing high- and low-level designs. While EVB solutions do the job very effectively (Nexus 1000v being a great example), VM-FEX eliminates that extra layer of design, troubleshooting, configuration and management. That is huge in real-life production environments, where you have to deal with complex virtual-machine environments (with SAN and NAS storage networking, several management domains, different security requirements and traffic separation policies).

    Virtual networking directly at the hardware layer paints a very simplified picture, with predictable behaviour and easy troubleshooting. I like that picture, and depending on the scenario I may prefer it to an embedded EVB or soft-switch.
  3. For me, the vSwitch is a blessing.

    In the old, non-virtual days, all I configured for a server was the VLAN it was on and the QoS marking.

    Now with vSwitch:
    * The server team is doing the VLAN mapping. Much less work for me.
    * The physical switch is doing the QoS marking.
    * vSwitch handles the load-balancing between ESX uplinks.
    * No spanning-tree issues
    * Life is good.

    I don't mind "giving" the server team the ability to map servers to VLANs; it's stupid work. Why should I do that?
  4. This study conveniently ignores TCP segmentation offload, checksum calculation offload, CRC offload and similar features that have to be done in software with soft switching.