What is OpenFlow?

My Open Networking Foundation rant got several thoughtful responses focusing on “what is OpenFlow and what can we do with it?” Let’s start with the easy part: what exactly is OpenFlow?

A typical networking device (bridge, router, switch, LSR ...) has a control plane and a data plane. The control plane runs all the control protocols (including port aggregation, STP, TRILL, MAC address learning and routing protocols) and downloads the forwarding instructions into the data plane structures, which can be simple lookup tables or specialized hardware (hash tables or TCAMs). In distributed architectures, the control plane has to use a communications protocol to download the forwarding information into the data plane instances. Every vendor uses its own proprietary protocol (Cisco, for example, uses IPC – InterProcess Communication – to implement distributed CEF); OpenFlow tries to define a standard protocol between the control plane and the associated data plane elements.
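The control-plane/data-plane split can be sketched in a few lines of Python. Everything here is invented for illustration – the class names and the dictionary-based "download" have nothing to do with any vendor's IPC mechanism or with the OpenFlow wire format:

```python
# Toy model of a control plane downloading forwarding entries into
# multiple data-plane instances (e.g. linecards). Illustration only.

class DataPlane:
    """One forwarding instance with a simple prefix -> output-port table."""
    def __init__(self):
        self.fib = {}

    def install(self, prefix, out_port):
        self.fib[prefix] = out_port

class ControlPlane:
    """Runs the routing protocols, then pushes the results downstream."""
    def __init__(self, data_planes):
        self.data_planes = data_planes

    def download(self, routes):
        # In a real device this is a proprietary IPC session (or an
        # OpenFlow channel); here it's just a method call.
        for prefix, out_port in routes.items():
            for dp in self.data_planes:
                dp.install(prefix, out_port)

linecards = [DataPlane(), DataPlane()]
cp = ControlPlane(linecards)
cp.download({"10.0.0.0/8": 1, "192.168.1.0/24": 2})
print(linecards[0].fib["10.0.0.0/8"])   # 1
```

The point of the sketch is only the direction of the information flow: the control plane computes, the data-plane instances merely store and look up what they were given.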

The OpenFlow zealots would like you to believe that we’re just one small step away from implementing Skynet; the reality is a bit more sobering. You need a protocol between control and data plane elements in all distributed architectures, starting with modular high-end routers and switches. Almost every modular high-end switch that you can buy today has one or more supervisor modules and numerous linecards performing distributed switching (preferably over a crossbar matrix, not over a shared bus). In such a switch, an OpenFlow-like protocol runs between the supervisor module(s) and the linecards.

Moving into more distributed space, the Borg fabric architectures use an OpenFlow-like protocol between the central control plane and the forwarding instances. You might have noticed that all vendors link at most two high-end switches into a Borg architecture at the moment; this decision has nothing to do with vendor lock-in and lack of open protocols but rather reflects the practical challenges of implementing a high-speed distributed architecture (alternatively, you might decide to believe the whole networking industry is a confusopoly of morons who are unable to implement what every post-graduate student can simulate with open source tools).

Moving deeper into the technical details, the OpenFlow Specs page on the OpenFlow web site contains a link to the OpenFlow Switch Specification v1.1.0, which defines:

  • OpenFlow tables (the TCAM structure used by OpenFlow);
  • OpenFlow channel (the session between an OpenFlow switch and an OpenFlow controller);
  • OpenFlow protocol (the actual protocol messages and data structures).

The designers of OpenFlow had to make the TCAM structure very generic if they wanted to offer an alternative to numerous forwarding mechanisms implemented today. Each entry in the flow tables contains the following fields: ingress port, source and destination MAC address, ethertype, VLAN tag & priority bits, MPLS label & traffic class, IP source and destination address (and masks), layer-4 IP protocol, IP ToS bits and TCP/UDP port numbers.

OpenFlow 1.0 does not support MPLS-related fields.
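As a rough illustration, the match fields listed above can be modeled as a wildcard-capable lookup. The field names below are my own shorthand (not the spec's structure names), and address masks are not modeled – a field is either an exact match or fully wildcarded:

```python
# Hypothetical sketch of an OpenFlow 1.1-style flow match: a field set to
# None (or simply absent) acts as a wildcard. Illustration only.

MATCH_FIELDS = (
    "in_port", "eth_src", "eth_dst", "eth_type",
    "vlan_id", "vlan_pcp", "mpls_label", "mpls_tc",
    "ip_src", "ip_dst", "ip_proto", "ip_tos",
    "l4_src_port", "l4_dst_port",
)

def matches(entry, packet):
    """True if every non-wildcard field of the entry equals the packet's."""
    return all(
        entry.get(f) is None or entry.get(f) == packet.get(f)
        for f in MATCH_FIELDS
    )

# A flow entry matching all web traffic, wildcarding everything else
web_flow = {"eth_type": 0x0800, "ip_proto": 6, "l4_dst_port": 80}
pkt = {"in_port": 3, "eth_type": 0x0800, "ip_proto": 6,
       "l4_dst_port": 80, "ip_src": "10.1.1.1"}
print(matches(web_flow, pkt))   # True
```

A hardware TCAM does essentially this comparison in a single clock cycle across all entries at once, which is precisely why wide, generic entries make the silicon expensive.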

To make the data plane structures scalable, OpenFlow introduces the concept of multiple flow tables linked into a tree (and group tables to support multicast and broadcast). This concept allows you to implement multi-step forwarding, for example:

  • Check inbound ACL (table #1)
  • Check QoS bits (table #2)
  • Match local MAC addresses and move into L3/MPLS table; perform L2 forwarding otherwise (table #3)
  • Perform L3 or MPLS forwarding (tables #4 and #5).

You can pass metadata between tables to make the architecture even more versatile.
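The four-step pipeline above could be simulated along these lines. The table numbers, action names, and metadata keys are invented for illustration and do not follow the spec's instruction encoding:

```python
# Toy multi-table pipeline in the spirit of OpenFlow 1.1: each table returns
# either a jump to another table (carrying metadata along) or a final
# forwarding action. Illustration only.

def acl_table(pkt, meta):
    if pkt.get("ip_src") == "192.0.2.1":       # demo deny rule
        return ("drop", None, meta)
    return ("goto", 2, meta)                   # permitted -> QoS table

def qos_table(pkt, meta):
    # Record the QoS decision as metadata for downstream tables
    meta = dict(meta, priority="high" if pkt.get("ip_tos", 0) else "best-effort")
    return ("goto", 3, meta)

def mac_table(pkt, meta):
    if pkt.get("eth_dst") == "local-router-mac":
        return ("goto", 4, meta)               # ours -> L3/MPLS forwarding
    return ("l2-forward", None, meta)          # otherwise bridge it

def l3_table(pkt, meta):
    return ("l3-forward", None, meta)

PIPELINE = {1: acl_table, 2: qos_table, 3: mac_table, 4: l3_table}

def process(pkt):
    table, meta = 1, {}
    while True:
        action, next_table, meta = PIPELINE[table](pkt, meta)
        if action != "goto":
            return action, meta
        table = next_table

print(process({"eth_dst": "local-router-mac", "ip_tos": 46}))
# -> ('l3-forward', {'priority': 'high'})
```

Splitting the lookup into stages is what keeps the tables small: instead of one huge table holding the cross-product of ACL, QoS, and forwarding entries, each stage matches only on the fields it cares about.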

OpenFlow 1.0 uses a single TCAM (flow table) and is thus totally boring compared to rich OpenFlow 1.1 functionality.

The proposed flow table architecture is extremely versatile (and I’m positive there’s a PhD thesis being written proving that it is a superset of every known and imaginable forwarding paradigm), but it will have to meet the harsh reality before we see full-blown OpenFlow switch products. You can implement the flow tables in software (in which case the versatility never hurts, but you’ll have to wait a few years before the Moore’s Law curve catches up with terabit speeds) or in hardware, where the large TCAM entries will drive the price up.

OpenFlow 1.0 is close enough to the TCAMs implemented in actual products that we might see shipping products in the near future; we’ll probably have to wait at least a few years before we see a full-blown hardware product implementing OpenFlow 1.1.

More information

To learn more about modern data center architectures and evolving fabric technologies, watch my Data Center 3.0 for Networking Engineers webinar (buy a recording or yearly subscription).

3 comments:

  1. <quote>
    A typical networking device (bridge, router, switch, LSR ...) has control and data plane.

    You missed the management plane!</quote>

  2. Ivan Pepelnjak, 05 April 2011, 18:50

    You're absolutely correct, but I didn't want to go there as OpenFlow works between control and data plane.

  3. Is it correct to say that we can isolate the control plane and the data plane using OpenFlow? And what does that exactly mean?



Ivan Pepelnjak, CCIE#1354, is the chief technology advisor for NIL Data Communications. He has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced technologies since 1990. See his full profile, contact him or follow @ioshints on Twitter.