For the last few years we’ve been hearing how networking is the last bastion of rigidity in the wonderful unicorn-flavored virtual world. Let’s see why it’s so much harder to virtualize networks than compute or storage (side note: it didn’t help that the virtualization vendors had no clue about networking, but things are changing).
When you virtualize compute capacity, you’re virtualizing RAM (a problem well understood for at least 40 years), CPU (same thing) and I/O ports (slightly trickier, but doable at least since Intel rolled out the 80286 processor). All of these are isolated resources confined to a single physical server. There’s zero interaction or tight coupling with other physical servers and no shared state, so it’s a perfect scale-out architecture – the only limiting factor is the management/orchestration system (vCenter, System Center …).
So-called storage virtualization is (in most cases) a fake – hypervisor vendors aren’t virtualizing storage, they’re usually running a shared file system on LUNs someone else already created for them (architectures with local disk storage use some variant of a global file system with automatic replication). I have no problem with that approach, but when someone boasts how easy it is to create a file on a file system as compared to creating a VLAN (the networking equivalent of a LUN), I get mightily upset. (Side note: why do we have to use VLANs? Because the hypervisor vendors had no better idea.)
There’s limited interaction between hypervisors using the same file system as long as they only read and write file contents. The moment a hypervisor has to change directory information (VMware) or update the logical volume table (Linux), the node making the change has to lock the shared resource. Due to SCSI limitations, the hypervisor making the change usually locks the whole shared storage, which works really well – just ask anyone using large VMFS volumes accessed by dozens of vSphere hosts. Apart from the locking issues and the throughput (SAN bandwidth and disk I/O) shared between hypervisor hosts and storage devices, there’s still zero interaction between individual VMs or hypervisor hosts – scaling storage is as easy (or as hard) as scaling files on a shared file system.
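The coarse-lock problem can be sketched in a few lines of toy code (this is an illustration of the principle, not a model of how VMFS or SCSI reservations actually work): metadata changes grab a volume-wide lock, so every host queues behind whoever is updating the directory, even when they touch unrelated files, while plain reads and writes of file contents need no shared lock at all.

```python
import threading

class SharedVolume:
    """Toy shared file system: one lock covers the entire volume."""
    def __init__(self):
        self.volume_lock = threading.Lock()  # whole-LUN lock, SCSI-reservation style
        self.directory = {}                  # file name -> bytes written

    def create_file(self, name):
        # Metadata change: must hold the volume-wide lock, serializing
        # every host on the volume, not just the ones touching this file.
        with self.volume_lock:
            self.directory[name] = 0

    def write_file(self, name, data):
        # Reading/writing file contents needs no shared lock, which is
        # why hypervisors on the same volume coexist happily most of the time.
        self.directory[name] += len(data)

volume = SharedVolume()
hosts = [threading.Thread(target=volume.create_file, args=(f"vm{i}.vmdk",))
         for i in range(10)]
for t in hosts:
    t.start()
for t in hosts:
    t.join()
print(len(volume.directory))  # 10 -- correct, but every create was serialized
```

Replacing the volume-wide lock with per-file locks is exactly the kind of finer-grained locking (ATS in VAAI-capable arrays) that later relieved the VMFS pain.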
In the virtual networking case, there was extremely tight coupling between virtual and physical switches, and there always will be tight coupling between all the hypervisors running VMs that belong to the same subnet (after all, that’s what networking is all about), be it a layer-2 subnet (VLAN/VXLAN/…) or a layer-3 routing domain (Hyper-V).
Because of that tight coupling, virtual networking is inherently harder to scale than virtual compute or storage. Of course the hypervisor vendors took the easiest possible route: they used simplistic VLAN-based layer-2 switches in the hypervisors and pushed all the complexity to the network edge and core, while at the same time complaining how rigid the network is compared to their software switches. It’s easy to scale out totally stupid edge layer-2 switches with no control plane (and thus zero coupling with anything but the first physical switch) if someone else does all the hard work.
Once the virtual switches tried to do the real stuff (starting with Cisco Nexus 1000V), things got incredibly complex (no surprise there). For example, Cisco’s Nexus 1000V handles at most 128 hypervisor hosts because the VSM runs the control-plane protocols. VMware NSX does way better because it decoupled the physical transport (IP) from the virtual networks – the controllers are used solely to push forwarding entries into the hypervisors when VMs are started or moved around.
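The decoupling idea is simple enough to sketch (hypothetical names throughout – this is the concept, not the NSX API): the controller keeps the authoritative VM-to-hypervisor mapping and pushes per-VM forwarding entries to the hypervisors only when a VM starts or moves, while the physical IP fabric never has to learn anything about individual VMs.

```python
class Controller:
    """Toy overlay controller: pushes VM location entries to hypervisors."""
    def __init__(self):
        self.hypervisors = {}  # hv transport IP -> forwarding table (vm_mac -> remote hv IP)

    def register_hypervisor(self, hv_ip):
        self.hypervisors[hv_ip] = {}

    def vm_started(self, vm_mac, hv_ip):
        # Push the entry to every other hypervisor; the hosting one
        # delivers locally and needs no remote entry.
        for ip, table in self.hypervisors.items():
            if ip == hv_ip:
                table.pop(vm_mac, None)  # VM is local now
            else:
                table[vm_mac] = hv_ip

    def vm_moved(self, vm_mac, new_hv_ip):
        # A move is just the same push with the updated location.
        self.vm_started(vm_mac, new_hv_ip)

ctrl = Controller()
for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    ctrl.register_hypervisor(ip)

ctrl.vm_started("00:50:56:aa:bb:01", "10.0.0.1")
ctrl.vm_moved("00:50:56:aa:bb:01", "10.0.0.2")
print(ctrl.hypervisors["10.0.0.3"]["00:50:56:aa:bb:01"])  # 10.0.0.2
```

Note what’s missing: no flooding, no MAC learning, no protocol running between hypervisors – the coupling between hypervisors in the same segment is handled entirely by controller pushes over the decoupled IP transport.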
Summary: Every time someone tells you network virtualization will become as easy as compute or storage virtualization, be wary – they probably don’t know what they’re talking about.