Open Networking Foundation – fabric craziness reaches new heights

Some of the biggest buyers of networking gear have decided to squeeze some extra discount out of the networking vendors and threatened them with an open-source alternative, hoping to repeat the Linux/Apache/MySQL/PHP saga that made it possible to build server farms out of low-cost commodity gear with almost zero licensing costs. They formed the Open Networking Foundation, found a convenient technology (OpenFlow), and launched another major entrant in Buzzword Bingo – Software-Defined Networking (SDN).

Networking vendors, either trying to protect their margins by stalling the progress of this initiative, or stampeding into another Wild West gold rush (hoping to unseat their bigger competitors with low-cost standards-based alternatives), have joined the foundation in hordes; the list of initial members reads like a Who’s Who of networking.

Now, let’s try to figure out what SDN might be all about. The ONF Mission Statement (on the first page) says “SDN allows owners and operators of networks to control and manage their networks to best serve their needs.” Are the founding members of ONF trying to tell us they have no control over their networks and lack network management systems? It must be something else. How about this one (from the same paragraph): “OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.” Now we’re getting somewhere – I told you it was all about reducing costs (starting with the networking vendors’ margins).

(Some of) the industry media happily joined the craze, parroting meaningless phrases from various press releases. Consider, for example, this article from IT World Canada.

“SDN would give network operators the ability to virtualize network resources, being able to dynamically improve latency or security on demand.” If you want to do it, you can do it today, using dynamic routing protocols or QoS (latency), vShield/VSG (on-demand security) or a number of virtualized networking appliances.

Also, protocols like RSVP to signal per-session bandwidth needs have been around for more than a decade, but somehow never caught on. Must be the fault of those stupid networking vendors.

“Sites like Facebook, Google or Yahoo would be able to tailor their networks so searches would be blindingly fast.” I never realized the main search problem was network bandwidth. I always somehow thought it was related to large datasets, CPU, database indices ... Anyhow, if network bandwidth is the bottleneck, why don’t they upgrade to next-generation Ethernet (10G/40G)? Ah, yes, it might be expensive. How about deploying a Clos network architecture? Ouch, might be a nightmare to configure and manage. How exactly will SDN solve this problem?
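For the record, the scale you can reach with a Clos fabric built from commodity switches is easy to work out. A quick sketch of the standard three-stage fat-tree arithmetic (the k=48 figure is just an illustration, not a claim about anyone’s actual data center):

```python
def fat_tree_capacity(k):
    """Size of a three-stage fat-tree (Clos) built from k-port switches.

    Each of the k pods has k/2 edge and k/2 aggregation switches;
    every edge switch serves k/2 hosts, and (k/2)**2 core switches
    interconnect the pods at full bisection bandwidth.
    """
    assert k % 2 == 0, "port count must be even"
    return {
        "hosts": k ** 3 // 4,          # k pods * (k/2 edge) * (k/2 hosts)
        "edge":  k * (k // 2),
        "agg":   k * (k // 2),
        "core":  (k // 2) ** 2,
    }

# 48-port switches: 27648 hosts behind 1152 edge, 1152 agg, 576 core switches
print(fat_tree_capacity(48))
```

The point stands either way: the topology is well understood, and the hard part is configuring and operating it, not inventing it.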

“Stock exchanges could assure brokerage customers on the other side of the globe they’d get financial data as fast as a dealer beside the exchange.” Will SDN manage to flatten & shrink the earth, will it change the speed of light, or will it use large-scale quantum entanglement?

“It could be programmed to order certain routers to be powered down during off-peak power periods.” What stops you from doing that today?

Don’t get me wrong – OpenFlow might be a good idea and it will probably lead to interesting new opportunities (assuming they can solve the scalability and resilience issues) ... and I’m absolutely looking forward to the podcast we’re recording later today.

However, there are plenty of open standards in the networking industry (including XML-based network configuration and management) waiting to be used. There are also existing, standard technologies you can use to solve most of the problems these people are complaining about. The problem is that operating systems and applications don’t use these standards and technologies (when was the last time you deployed a server running OSPF to get seamless multihoming?).
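Take the XML-based configuration standard as a concrete example: NETCONF (RFC 4741 at the time of writing) already gives you a vendor-neutral RPC for retrieving and editing device configurations. A minimal request for the running configuration looks roughly like this (the message-id is an arbitrary client-chosen value):

```xml
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>
```

The protocol has been standardized for years; what’s been missing is applications and management systems that actually speak it.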

The main problems we’re facing today arise primarily from non-scalable application architectures and a broken TCP/IP stack. In a world with scale-out applications you don’t need fancy combinations of routing, bridging and whatever else; you just need fast L3 transport between endpoints. In an Internet with a decent session layer or a multipath transport layer (be it SCTP, Multipath TCP or something else) you don’t need load balancers, BGP sessions with end customers to support multihoming, or LISP. All these kludges were invented to accommodate OS/app people who firmly believe in the fallacies of distributed computing. How is SDN supposed to change that? I’m anxiously waiting to see an answer beyond marketing/positioning/negotiating bullshit bingo.
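To illustrate how little the endpoints would have to change: on systems that expose Multipath TCP (Linux kernels 5.6 and later, Python 3.10 and later), opting a socket into multipath transport is a one-line change. This is a hedged sketch, not a universal recipe; availability depends entirely on your kernel and interpreter, hence the fallback:

```python
import socket

def make_stream_socket():
    """Return an MPTCP socket where the OS supports it, else plain TCP.

    socket.IPPROTO_MPTCP exists only on Python 3.10+; even then, the
    socket() call can fail on kernels built without MPTCP support,
    so we fall back to an ordinary TCP socket in both cases.
    """
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             socket.IPPROTO_MPTCP)
    except (AttributeError, OSError):
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = make_stream_socket()
print("stream socket created:", s.fileno() >= 0)
s.close()
```

If application stacks adopted something like this (or SCTP), the multihoming and load-distribution kludges in the network would have far less reason to exist.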

Update @ 2011-03-31 20:08 UTC - Just finished the OpenFlow Packet Pushers podcast with Matt Davy ... and he managed to get me excited. It's an interesting technology, it provides cool solutions to some of the problems we have today, but it's also a bit like assembly-language programming: it gives you a lot of rope to hang yourself. The stupidities being written about it are doing it a true disservice.

Prefer to hear about real-life technologies and products?

You’ll learn more about real-life Data Center architectures and technologies, including TRILL, SPB, FabricPath, FCoE and others in my Data Center 3.0 for Networking Engineers webinar (buy a recording or yearly subscription).

10 comments:

  1. Alexandra Stanovska31 March, 2011 13:18

    I was thinking it may have come from this direction http://perspectives.mvdirona.com/2010/10/31/DatacenterNetworksAreInMyWay.aspx - Brad Hedlund linked to this article from one of his posts about datacenters - but I don't see Amazon listed in this initiative :) While James Hamilton may sound inconvenient to many with his opinions, his anger is actually understandable from his point of view.

    ReplyDelete
  2. Some good (and amusing!) points on the buzz/hype surrounding SDN, though it's not the first time I've heard vendor and pet press suggest that they could alter the constants of physics :)

    I'd read a little bit about OpenFlow/SDN before they became an "industry consortium", and it does look like an interesting technology, even though it won't help you determine the vitals of Schrödinger's Cat. I see it as more of a testbed platform, at least initially, that would allow the evolution of large-scale network protocols - think IPv8 - on a global basis, in much shorter timeframes. As you correctly stated, methods exist to do this today, but they aren't simple, and they aren't necessarily universal. I see OpenFlow as having the potential to bring these capabilities down to the enthusiast level, increasing the number of testers and allowing managers and administrators to build their knowledge and work out bugs along with those individuals or teams who have the greatest interest and generally some of the most useful feedback.

    ReplyDelete
  3. Oh, come on, give them a chance :) Salute from IVTF, Prague :)

    ReplyDelete
  4. Ivan Pepelnjak31 March, 2011 17:47

    As I wrote - I think OpenFlow is a good idea and its potential is exactly where you see it: bringing the capabilities to experiment with networking gear to the enthusiast level.

    I still fail to see what SDN is. After figuring that out, I might understand how it relates to OpenFlow :-P

    ReplyDelete
  5. I loathe OpenFlow, for it painfully reminds me of the days I spent learning MLS and IP/IPX flow masks :D

    ReplyDelete
  6. Ivan Pepelnjak31 March, 2011 22:21

    That's exactly one of my counter-arguments: MLS in any incarnation I've seen so far (be it from Cisco, Cabletron or anyone else) has failed miserably. Why should it work this time?

    However, per-flow MLS-like behavior is just one of the potential applications. You can also use OpenFlow for numerous other things, like downloading (static, pre-computed) L3 routing tables into switches or setting up MPLS VCs across an SP network (the MPLS-TP folks will love that :-E )

    ReplyDelete
  7. Damn, I'm going to have to change the whiteboard that has the speed of light permanently written in the top corner. What will the new speed of light be? :-D

    ReplyDelete
  8. Interesting take on this. I agree that what's in print seems a bit perplexing.

    What little I've seen on the topic that was coherent shrieked "academic" to me. Everything I've seen so far sounded like new control plane, maybe new routing logic in the routing entities. I like the comment already posted about development. Doing plugin code might make sense for that. The API for interfacing with hardware chipsets (e.g. for packet recognition) might get interesting too. Without low-level hardware interaction, you sure as heck aren't going to improve performance as far as I can see.

    I can see some benefits. To pick an example, Cisco Multi-Topology Routing seems to have been done once over lightly with no visible recent further progress (or I missed it). As I understood it, it sounded like matching QoS criteria and selecting forwarding table based on that. That's some pretty hardware-specific code I'd think. I suspect Cisco isn't pursuing it as a priority since it's a technical solution with increased complexity solving a problem that only a few have. Well, if it were open source, so to speak, then someone who cared could finish fleshing out that feature set (turning all routers into commodities?). How do the bugs get worked out? How does the code get rendered efficient? And can that happen quickly?

    Can you imagine troubleshooting hand-crafted site-specific code problems in a production network? Let's not even go there...

    One of the problems I see in IT right now is WAY too many combinations of web / middleware / DB, software, load balancers, routing designs, etc., etc. Everything is a one-off, and you have an exponential explosion of combinations even in one datacenter. Heck, I keep doing work at sites where various projects brought in every Server Load Balancing technique / vendor I've ever heard of -- and staff has to expend cycles learning about and supporting all of them! Now add routing variations? Yecch. I'd sure like a darn good reason before that happens!

    ReplyDelete
  9. Virtual Networking03 April, 2011 02:09

    Although, like you, I am skeptical that we'll ever see truly 'inter-controllable' networks, I do think there are lots of powerful capabilities that are enabled via SDNs. Specifically, for the kinds of multi-tenant networks that are going to be needed when infrastructure stretches across different organizations, IaaS and other service providers are going to need something fundamentally different from what's being offered today. On the flip side, if all you've got to deal with is your own enterprise network, I'm not sure SDN has much to offer.

    ReplyDelete
  10. Someone here sounds scared.

    ReplyDelete

You don't have to log in to post a comment, but please do provide your real name/URL. Anonymous comments might get deleted.

Ivan Pepelnjak, CCIE#1354, is the chief technology advisor for NIL Data Communications. He has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced technologies since 1990. See his full profile, contact him or follow @ioshints on Twitter.