OpenFlow: A perfect tool to build SMB data centers
When I was writing about the NEC+IBM OpenFlow trials, I figured out a perfect use case for OpenFlow-controlled network forwarding: SMB data centers that need no more than a few hundred physical servers – be they bare-metal servers or hypervisor hosts (hat tip to Brad Hedlund for nudging me in the right direction a while ago).
The Dream
As you can imagine, an OpenFlow-controlled switch is extremely simple to configure: set its own IP address, management VLAN, and the controller’s IP address, and let the controller do the rest.
Once the networking vendors figure out “the fine details”, they could use dedicated management ports for an out-of-band OpenFlow control plane (similar to what QFabric is doing today), DHCP to assign an IP address to the switch, and a new DHCP option to tell the switch where the controller is. The DHCP server would obviously run on the OpenFlow controller, and the whole control-plane infrastructure would be completely isolated from the outside world, making it pretty secure.
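To make the idea a bit more tangible, here’s a minimal sketch of what that option encoding could look like – keep in mind that no such DHCP option exists today; the use of option 43 (vendor-specific information) and the 6-byte payload layout are pure assumptions on my part:

```python
import socket
import struct

# Hypothetical sketch: pack the OpenFlow controller's address into DHCP
# option 43 (vendor-specific information). The TLV layout used here
# (4-byte IPv4 address + 2-byte TCP port) is an assumption, not a
# standardized format.
def controller_option(controller_ip, controller_port=6633):
    payload = socket.inet_aton(controller_ip) + struct.pack("!H", controller_port)
    option_code = 43                      # vendor-specific information
    return struct.pack("!BB", option_code, len(payload)) + payload

# The switch would decode the option and open its OpenFlow session:
def parse_controller_option(option):
    code, length = struct.unpack("!BB", option[:2])
    ip = socket.inet_ntoa(option[2:6])
    port = struct.unpack("!H", option[6:8])[0]
    return ip, port

if __name__ == "__main__":
    opt = controller_option("192.168.100.1")
    print(parse_controller_option(opt))   # ('192.168.100.1', 6633)
```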
The extra hardware cost of this significantly reduced complexity (no per-switch configuration and a single management/SNMP IP address): two dumb 1GE switches for the out-of-band control-plane network (to make the setup redundant), hopefully running MLAG (to get rid of STP).
Finally, assuming server virtualization is the most common use case in an SMB data center, you could tightly couple the OpenFlow controller with VMware’s vCenter and let vCenter configure the whole network:
- CDP or LLDP would be used to discover server-to-switch connectivity;
- The OpenFlow controller would download port group information from vCenter and automatically provision VLANs on server-to-switch links;
- Going a step further, the OpenFlow controller could configure static port channels based on the load-balancing settings of individual port groups.
End result: a decently large layer-2 network with no STP, automatic multipathing, automatic adjustment to VLAN changes, a single management interface, and the minimum number of moving parts. How cool is that?
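To illustrate what the vCenter coupling might look like under the hood, here’s a hypothetical sketch of the controller-side provisioning loop. All four helper functions are placeholders I made up – in a real product they’d map to vSphere API calls (e.g., via pyVmomi), LLDP topology data collected by the controller, and OpenFlow/OF-Config pushes to the switches:

```python
# Hypothetical sketch of the vCenter-driven provisioning loop described
# in the list above. The helpers are placeholders, not real library calls.

def get_vcenter_portgroups(vcenter):
    """Return a list of dicts: {'host': ..., 'vmnics': [...], 'vlan': ..., 'teaming': ...}."""
    raise NotImplementedError("placeholder for a vSphere API call")

def get_lldp_neighbors(controller):
    """Return {(host, vmnic): (switch, switch_port)} discovered via CDP/LLDP."""
    raise NotImplementedError("placeholder for controller topology data")

def provision_vlan(controller, switch, port, vlan):
    raise NotImplementedError("placeholder for an OpenFlow/OF-Config push")

def provision_port_channel(controller, switch, ports):
    raise NotImplementedError("placeholder for static port-channel setup")

def sync_network(vcenter, controller):
    neighbors = get_lldp_neighbors(controller)       # step 1: who is connected where
    for pg in get_vcenter_portgroups(vcenter):       # step 2: what vCenter expects
        switch_ports = {}
        for vmnic in pg["vmnics"]:
            switch, port = neighbors[(pg["host"], vmnic)]
            provision_vlan(controller, switch, port, pg["vlan"])
            switch_ports.setdefault(switch, []).append(port)
        if pg["teaming"] == "ip-hash":               # step 3: static LAG if the port group uses IP hash
            for switch, ports in switch_ports.items():
                provision_port_channel(controller, switch, ports)
```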
Scenario #1 – GE-attached servers
If you decide to use GE-attached servers and run virtual machines on them, it would be wise to use four to six uplinks per hypervisor host (two for VM data, two for kernel activities, and optionally another two for iSCSI or NFS storage traffic).
You could easily build a GE Clos fabric using switches from NEC America: PF5240 (ToR switch) as leaf nodes (48 GE ports and 4 x 10GE uplinks give you almost no oversubscription – 48 Gbps of server-facing bandwidth behind 40 Gbps of uplinks), and PF5820 (10GE switch) as spine nodes and the interconnection point with the rest of the network.
Using just two PF5820 spine switches you could get over 1200 1GE server ports – enough to connect 200 to 300 servers (around 5000 VMs).
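Here’s the back-of-the-envelope math behind those numbers – the port counts are my reading of the data sheets (including the assumption that the PF5820’s 40GE ports can be split into 4 x 10GE), so treat them as assumptions rather than facts:

```python
# Back-of-the-envelope fabric sizing for Scenario #1. Port counts are
# assumptions based on data sheets: PF5240 leaf = 48 x GE + 4 x 10GE uplinks;
# PF5820 spine = 48 x 10GE plus 4 x 40GE (assumed usable as 16 x 10GE).
leaf_ge_ports = 48
leaf_uplinks = 4                      # 4 x 10GE per PF5240 leaf
spine_10ge_ports = 48 + 4 * 4         # 64 usable 10GE ports per PF5820
spines = 2

oversubscription = leaf_ge_ports / (leaf_uplinks * 10)       # 48G down vs 40G up
leaves = spines * spine_10ge_ports // leaf_uplinks            # leaf switches supported
server_ports = leaves * leaf_ge_ports                         # total GE server ports

print(f"{oversubscription:.1f}:1 oversubscription, {leaves} leaves, {server_ports} GE ports")
# -> 1.2:1 oversubscription, 32 leaves, 1536 GE ports
```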
You'd want to keep the number of switches controlled by the OpenFlow controller low to avoid scalability issues. NEC claims they can control up to 50 ToR switches with a controller cluster; I would be slightly more conservative.
Scenario #2 – 10GE-attached servers
Things get hairy if you want to use 10GE-attached servers (or, to put it more diplomatically, IBM and NEC are not yet ready to handle this use case):
- If you want true converged storage with DCB, you have to use IBM’s switches (NEC doesn’t support DCB), and even then I’m not sure how DCB would interact with OpenFlow.
- PF5820 (NEC) and G8264 (IBM) have 40GE uplinks, but I have yet to see a 40GE OpenFlow-enabled switch with enough port density to serve as the spine node. At the moment, it seems that bundles of 10GE uplinks are the way to go.
- It seems (according to the data sheets, but I could be wrong) that NEC supports only 8-way multipathing, and we’d need at least 16-way multipathing to get 3:1 oversubscription (48 x 10GE server-facing ports per leaf switch behind 16 x 10GE uplinks).
Anyhow, assuming all the bumps eventually do get ironed out, you could have a very easy-to-manage network connecting a few hundred 10GE-attached servers.
Will it ever happen?
I remain skeptical, mostly because every vendor seems obsessed with cloud computing and zettascale data centers while ignoring the mid-scale market … but there might be a silver lining. This idea would make the most sense if you could buy a prepackaged data center (think VCE block) at a reasonably low price (to make it attractive to SMB customers).
A few companies (Dell, HP, IBM) have all the components one would need in an SMB data center, and Dell just might be able to pull it off (while HP is busy telling everyone how they’ll forever change the networking industry). And now that I’ve mentioned Dell: how about configuring your data center through a user-friendly web interface and having it shipped to your location a few weeks later?
More information
If you need to know more about data centers, network virtualization, and OpenFlow, you might find these webinars relevant:
- Start with Introduction to Virtualized Networking;
- Generic data center technologies and designs are described in Data Center 3.0 for Networking Engineers, and large-scale network designs (including leaf & spine and Clos architectures) in the Data Center Fabric Architectures webinar;
- To find out more about OpenFlow, watch our OpenFlow Deep Dive webinar;
- Learn everything there is to know about VMware’s vSwitch and other VMware-related networking solutions in VMware Networking Deep Dive.
And don’t forget: you get access to all these webinars (and numerous others) if you buy the yearly subscription.
Or are you talking about a few hundred ESXi hosts? In that case, would it still be an SMB?
http://www.jedelman.com/1/post/2012/01/future-openflowsdn-applications.html - Thanks again for plugging it in your interesting links blog!
That type of integration is crucial for any OF/SDN vendor trying to win in the SMB AND enterprise markets, quite simply because VMware dominates those segments.
Love the focus on the smaller customers though.
OpenFlow is a solution looking for a problem in this sort of SMB environment, and something like TRILL with a good centralized configuration management tool is much more attractive.
SMBs typically have "IT generalists" on staff, not network specialists. Typically a few combination DBA/server/network engineers, who also have a little software development responsibility to boot. These folks need simple plug-and-play behavior more than anything else, and I suspect any OpenFlow-based solution will not be "simple" to implement for a long while.
It's all well and good to have some consultants come in during implementation, but an SMB will not pay those same consultants long-term for maintenance and upgrades, so simplicity and maintainability is a primary business driver.
This is why SMBs love VMware: vCenter is the closest thing to a "single pane of glass" for management the IT industry has ever produced, especially if your storage vendor has plug-ins. And it's quite simple to keep it running.