What Is OpenFlow (Part 2)?

Got this set of questions from a CCIE pondering emerging technologies that could be useful in his data center:

I don’t think OpenFlow is clearly defined yet. Is it a protocol? A model for control plane – forwarding plane (FP) interaction? An abstraction of the forwarding plane? An automation technology? Is it a virtualization technology? I don’t think there is consensus on these things yet.

As the OpenFlow Symposium is just a few weeks away, let’s try to position OpenFlow in the big picture.

OpenFlow is very well defined: it’s a protocol between the control plane (controller) and the data plane (switch) that allows the control plane to:

  • Modify forwarding entries in the data plane;
  • Send control-protocol (or data) packets through any port of any controlled data-plane device;
  • Receive (and process) packets that cannot be handled by the data-plane forwarding rules. These packets could be control-plane protocol packets (for example, LLDP) or user data packets that need special processing.
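Under the hood it really is just a wire protocol. As a minimal illustration, here’s the 8-byte header that precedes every OpenFlow 1.0 message; the header layout and type codes are from the OpenFlow 1.0 specification, while the function name and values are mine:

```python
import struct

OFP_VERSION_1_0 = 0x01
OFPT_PACKET_OUT = 13   # controller -> switch: send this packet out a port
OFPT_FLOW_MOD   = 14   # controller -> switch: add/modify/delete a flow entry

def ofp_header(msg_type, payload_len, xid):
    """Pack the 8-byte OpenFlow 1.0 header: version, type, length, xid."""
    return struct.pack("!BBHI", OFP_VERSION_1_0, msg_type,
                       8 + payload_len, xid)

# A FLOW_MOD header for a hypothetical 64-byte body, transaction ID 42
hdr = ofp_header(OFPT_FLOW_MOD, payload_len=64, xid=42)
print(hdr.hex())  # 010e00480000002a
```

The actual flow-mod body (match structure, actions) follows the header; the point is simply that there’s nothing magic about the protocol itself.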

As part of the protocol, OpenFlow defines abstract data plane structures (forwarding table entries) that have to be implemented by OpenFlow-compliant forwarding devices (switches).

Is it an abstraction of the forwarding plane? Yes, insofar as it defines data structures that can be used in OpenFlow messages to update data-plane forwarding structures.

Is it an automation technology? No, but it can be used to automate network deployments. Imagine a cluster of OpenFlow controllers with shared configuration rules that use the packet-carrying capabilities of the OpenFlow protocol to discover the network topology (using LLDP or a similar protocol), build a shared topology map of the network, and use it to download forwarding entries into the controlled data planes (switches). Such a setup would definitely automate new device provisioning in a large-scale network.
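The discovery step can be sketched with a toy simulation (the wiring table, switch names and probe mechanics are entirely made up for illustration; a real controller would send LLDP frames via packet-out messages and learn links from the resulting packet-in events):

```python
# Hypothetical physical wiring: (switch, port) -> (switch, port)
wiring = {
    ("s1", 1): ("s2", 1),
    ("s2", 1): ("s1", 1),
    ("s1", 2): ("s3", 1),
    ("s3", 1): ("s1", 2),
}

def discover(switch_ports):
    """Emulate LLDP-style discovery: probe every port, record the links."""
    links = set()
    for src_sw, src_port in switch_ports:
        dst = wiring.get((src_sw, src_port))       # probe "sent" out the port
        if dst:                                    # packet-in on the far end
            links.add(frozenset([(src_sw, src_port), dst]))
    return links

ports = [("s1", 1), ("s1", 2), ("s2", 1), ("s3", 1)]
print(len(discover(ports)))  # 2 links: s1<->s2 and s1<->s3
```

Once the controller has the topology map, downloading forwarding entries into each switch is just a series of flow-mod messages.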

Alternatively, you could use OpenFlow to create additional forwarding (actually packet-dropping) entries in access switches or wireless access points deployed throughout your network, resulting in a scalable multi-vendor ACL solution.
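The drop-entry idea can be sketched with a toy flow table; the match fields, priorities and actions loosely mimic OpenFlow flow entries, but the table and addresses are hypothetical:

```python
def matches(entry, packet):
    """An entry matches when all its match fields equal the packet's fields."""
    return all(packet.get(k) == v for k, v in entry["match"].items())

def forward(table, packet):
    """Pick the highest-priority matching entry; default behavior is 'flood'."""
    hits = [e for e in table if matches(e, packet)]
    if not hits:
        return "flood"
    return max(hits, key=lambda e: e["priority"])["action"]

# A single ACL-style drop entry pushed into every edge switch
acl = [{"match": {"ip_dst": "10.0.0.66"}, "priority": 100, "action": "drop"}]
print(forward(acl, {"ip_dst": "10.0.0.66"}))  # drop
print(forward(acl, {"ip_dst": "10.0.0.1"}))   # flood
```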

Is it a virtualization technology? Of course not. However, its data structures can be used to perform MAC address, IP address or MPLS label lookup and push user packets into VLANs (or push additional VLAN tags to implement Q-in-Q) or MPLS-labeled frames, so you can implement most commonly used virtualization techniques (VLANs, Q-in-Q VLANs, L2 MPLS-based VPNs or L3 MPLS-based VPNs) with it.
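A rough sketch of one such building block, assuming a made-up MAC-to-VLAN mapping: look up the source MAC address and push a VLAN tag (pushing a second, outer tag onto an already-tagged frame would give you Q-in-Q):

```python
# Hypothetical lookup table: source MAC address -> VLAN to push
mac_to_vlan = {"00:00:00:00:00:01": 100, "00:00:00:00:00:02": 200}

def push_vlan(packet):
    """Push a VLAN tag based on a MAC lookup; prepending to an existing
    tag stack models the Q-in-Q (outer S-tag) case."""
    vlan = mac_to_vlan.get(packet["eth_src"])
    if vlan is not None:
        packet.setdefault("vlan_tags", []).insert(0, vlan)
    return packet

frame = {"eth_src": "00:00:00:00:00:01"}
print(push_vlan(frame)["vlan_tags"])  # [100]
```

Swap the MAC lookup for an IP or MPLS label lookup and the pushed tag for an MPLS label, and you get the L2/L3 MPLS-based VPN cases mentioned above.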

There’s no reason you couldn’t control a soft switch (embedded in the hypervisor) with OpenFlow. An open-source hypervisor switch implementation (Open vSwitch) that has “many extensions for virtualization” is already available and can be used with Xen/XenServer (it’s the default networking stack in XenServer 6.0) or KVM.

I’m positive the list of Open vSwitch extensions is hidden somewhere in its somewhat cryptic documentation (or you could try to find them in the source code), but the list of OpenFlow 1.2 proposals implemented by Open vSwitch or sponsored by Nicira should give you some clues:

  • IPv6 matching (cool, finally IPv6 support) with IPv6 header rewrite (so they must be aiming at an L3 hypervisor switch ... even cooler);
  • Virtual port tunnel configuration protocol and GRE/L3 tunnel support. Obviously they’re developing a VXLAN competitor or even an IP-over-IP solution;
  • Controller master/slave switchover, a must for resilient large-scale solutions.

Summary: OpenFlow is like C++. You can use it to implement all sorts of interesting solutions, but it’s just a tool.

And a completely tangential thought: reading the analysis of the recent HP launch @ Twilight in the Valley of the Nerds, it was refreshing to see the down-to-earth description of OpenFlow’s potential and its current maturity (including drawbacks and missing features) from Saar Gillai, CTO of HP Networking.

More information

I wrote about OpenFlow before; read the What is OpenFlow post for more technical details. I also mentioned it in my Data Center Fabric Architectures presentation @ EuroNOG 2011.

And here’s the usual end-of-post blurb:

The concepts and challenges of virtualized networking are described in the Introduction to Virtualized Networking webinar.

Looking for big-picture perspective or in-depth discussions of various data center and network virtualization technologies? Check my Data Center 3.0 for Networking Engineers (recording) and VMware Networking Deep Dive (recording) webinars. Both webinars are also available as part of the Data Center Trilogy and you get access to all three of them (and numerous others) as part of the yearly subscription.

1 comment:

  1. OpenFlow (at least 1.0) probably best compares to Tcl, with C++ being noteworthy for its heavyweight complexity ;) In my opinion, the main selling point for OpenFlow in large-scale (100K+ servers) data centers is significant simplification (if not complete elimination) of the network control plane. Complex, multi-vendor software is the main barrier on the way to unified automation and improved network stability. In highly specialized networks, such as simple symmetric full-bisection-bandwidth topologies, the feature richness of common control/management plane stacks becomes unnecessary, more likely turning into a source of software bugs and operational headaches. On the "innovation" side praised by OpenFlow adepts, large-scale networks would probably benefit most from advanced, automated troubleshooting tools not found in modern IP networks.
