NEC Launched a Virtual OpenFlow Switch – Does It Matter?
On January 22nd NEC launched another component of their ProgrammableFlow architecture: a virtual switch for the Hyper-V 3.0 environment. The obvious questions to ask are: (a) why do we care, and (b) how is it different from Nicira or BigSwitch?
TL&DR summary: It depends.
What is it?
The ProgrammableFlow 1000 is a virtual switch for the Hyper-V 3.0 environment, tightly integrated with NEC’s existing ProgrammableFlow physical data center fabric.
It’s also the first OpenFlow virtual switch that works on a non-Linux virtualization platform – Nicira (Open vSwitch), Contrail/Juniper and Midokura have Linux-based virtual switches (Nicira’s VMware hack doesn’t count) that you can use with KVM, Xen or Linux-container-like environments, but not with the more traditional enterprise hypervisors.
It’s obvious the (former) startups I just mentioned target large cloud providers (no wonder, you have to show a huge use case if you want to attract VC funding or get acquired), while NEC targets enterprises and their private clouds.
How is it different from Nicira?
Nicira’s NVP focuses on the hypervisor switches and assumes the underlying physical network provides IP transport or VLANs. NVP does not touch the physical switches – a clear scalability advantage, but also an operational drawback for mid-size enterprise environments that still have to provision and troubleshoot the physical network with a separate set of tools.
ProgrammableFlow takes a different approach: all physical and virtual switches are controlled by a cluster of ProgrammableFlow controllers. The physical and virtual parts of the data center network are tightly integrated and appear as a single virtual device.
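NEC’s controller is a proprietary product, so purely as an illustration of the centralized OpenFlow model it builds on (and not of anything NEC actually ships), here’s a minimal controller application written with the open-source Ryu framework. It pushes a table-miss flow entry into every OpenFlow 1.3 switch, physical or virtual, that connects to it, so unmatched traffic gets punted to the controller for a forwarding decision:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    """Install a table-miss entry into every switch that connects to us."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath          # one datapath object per connected switch
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything and send it to the controller: the lowest-priority
        # "table miss" entry defined by OpenFlow 1.3.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))
```

Run it with ryu-manager and point any OpenFlow switch at it (for Open vSwitch, ovs-vsctl set-controller does the trick). A production controller would obviously pre-compute and proactively install end-to-end forwarding entries instead of punting packets, but the control relationship (switches hold only the state the controller pushes into them) is the same.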
Nicira is ideal for large cloud environments with thousands of servers; ProgrammableFlow is ideal for mid-size enterprises with hundreds of VMs, tens of physical servers and a few switches connecting them together.
How is it different from BigSwitch?
There’s no significant difference from the high-level marketecture perspective ... but there might be a huge gap between theory and practice.
ProgrammableFlow has been shipping for more than 18 months, and NEC has probably caught numerous real-life glitches that BigSwitch hasn’t been exposed to yet. Also, NEC decided to tightly integrate its controller with its own switches, while BigSwitch works with a large ecosystem of partners. Draw your own conclusions.
Finally, does it matter?
With all the hypervisor vendors loudly jumping on the overlay networking bandwagon, the ProgrammableFlow approach seems to be an evolutionary dead branch, at least given today’s set of facts.
However, operators of mid-sized enterprise data centers just might prefer a single integrated and tested configuration/control/management entity over an infinitely scalable build-it-yourself one ... and don’t forget that ProgrammableFlow, with its VLAN-based encapsulation, works with existing firewalls and load balancers, whereas you still need virtual appliances or brand-new hardware to connect the brave new overlay world to physical reality.
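To put that last point in perspective, here’s a back-of-the-envelope comparison of what the two approaches do to a frame; the header sizes are the standard ones, the Python framing is just for illustration:

```python
# An 802.1Q VLAN tag inserts 4 bytes into the original Ethernet frame,
# so an existing VLAN-aware firewall or load balancer still sees, and can
# inspect, the original packet.
VLAN_TAG_BYTES = 4

# A VXLAN-style overlay wraps the whole original frame in new outer headers:
# Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8) bytes. An existing appliance
# sees only the outer UDP tunnel, not the tenant traffic carried inside it.
VXLAN_OVERHEAD_BYTES = 14 + 20 + 8 + 8

print(f"802.1Q overhead: {VLAN_TAG_BYTES} bytes, original frame stays visible")
print(f"VXLAN overhead: {VXLAN_OVERHEAD_BYTES} bytes, original frame is hidden")
```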
Why is "not touching the physical switches" an operational drawback? It's not quite clear to me. Could you explain a bit?
Isn't it easier to configure hypervisors than to manage switch state?
It seems to me that everyone's chasing solutions that scale up to enormous cloud networks, but the reality is that relatively few of us will ever work on networks of that scale. Most of us have real problems today on typical medium-size networks that, as you say, have tens of physical machines and hundreds of VMs.
Looking forward to the webinar. 5AM my time (ouch).
Does Juniper have a virtual switch?
Who wants to spend another century with the bridge kludge that was invented to solve problems we don't have today (LAT and other oldies)?
Also, an L3-only approach might require programmers to actually understand the concept of mapping an FQDN into an IP address through the gethostbyname() function ... and then there's the "slight" problem of having multihomed servers with no routing protocol.
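For anyone who hasn't had to do it, the mapping in question is a single library call; a quick Python illustration (the hostname is just an example):

```python
import socket

# Legacy, IPv4-only resolver call mentioned above.
print(socket.gethostbyname("www.example.com"))

# The modern, address-family-agnostic equivalent returns both A and AAAA results.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80):
    print(family, sockaddr)
```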
Programmers already face the mapping challenge in the current environment.
Multihomed servers are a source of problems today (asymmetric routing paths and L2 switches flooding all your neighbors come to mind). Basically, servers don't need much routing intelligence; they could just send traffic back out the interface it came in on (that's more or less what I've done on a Linux box recently; see the sketch below). With some basic security screening we could even revive ICMP redirects.
Except for the vendor part, I think this is doable (can't be worse than bridging, no?)
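For reference, the Linux trick mentioned above boils down to a couple of iproute2 calls; here's a minimal sketch of source-based policy routing, with all interface names and addresses made up:

```python
import subprocess

def run(cmd):
    # Needs root; raises if iproute2 rejects the command.
    subprocess.run(cmd, check=True)

# Assume the second NIC is eth1 with address 192.0.2.10 and gateway 192.0.2.1.
# Traffic sourced from eth1's address is looked up in routing table 100 ...
run(["ip", "rule", "add", "from", "192.0.2.10", "table", "100"])

# ... and table 100 sends it back out through eth1, so replies leave on the
# interface the request arrived on, without running a routing protocol.
run(["ip", "route", "add", "default", "via", "192.0.2.1",
     "dev", "eth1", "table", "100"])
```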