

Maybe It's Time We Start Appreciating Standards

A friend of mine sent me a short message including…

There are a number of products that recently arrived or are coming to market using group encryption systems for IP networks, but are (understandably) not using IPsec.

… which triggered an old itch of mine caused by the “We don’t need no IETF standards, code is king” stupidity.

The Young Turks promoting the “code is king” sentiment are probably too young to remember the days when every computer vendor had its own networking stack, when some of them had their own LAN implementations, when we had three incompatible encapsulation formats on a single LAN technology, when it was nearly impossible to interconnect equipment from multiple vendors, and when even Unix systems weren’t exactly compatible…

The only reason they can connect to a WiFi hotspot in a nearby coffee shop and start spouting opinions is that someone spent decades developing data-link-layer standards (Ethernet and WiFi), network- and transport-layer standards (IP and TCP), application-layer standards (HTTP), networking control-plane protocols (BGP, IS-IS, OSPF), and a plethora of other things that make the Internet work (DNS, DHCP…). Sitting on top of all that infrastructure and claiming “we don’t need no stinking IETF” seems… shortsighted?

Don’t get me wrong - I hate the IETF process as much as anyone else (it took years to agree on documenting best practices for securing BGP), but the rounds of endless arguing sometimes have a positive side effect: we might avoid the “I cut it three times and it’s still too short” behavior.

On the other hand, there are people participating in standards bodies for the sake of participating, job security, blocking competing efforts, or spending time in interesting locations with other people’s money. History is full of “standards” created because a bunch of people had nothing better to do (see also: OSI model battle scars), and IETF seems to be heading in the same direction… or as a friend of mine recently put it:

I hate the IETF not just because the process is long, and painful, but because it’s filled with draft writers, most of whom wouldn’t know a good implementation if it hit them in the face. It’s all draft by committee and we know where that goes.

A long time ago the IETF had a safeguard against half-baked (and frivolous) ideas: they wanted to see at least two independent implementations before something could become a Proposed Standard RFC. Unfortunately, that requirement was lifted a long time ago, citing the need for timeliness… but how do you know a specification is good enough if only a single group ever tried to implement it? How can your test cases provide good-enough coverage if you never encounter another implementation in the wild… and why should future generations deal with a stack of obsoleted RFCs just because nobody stress-tested the ideas before the draft was baked into an RFC?

For a perfect example of that, research the history of IPv6 extension headers and RA Guard.

In the end, we need both: a thoughtful process grounded in “rough consensus, working code” that results in usable and useful standards… and fortunately, at least some working groups still believe in the old way of doing things, so we might have interoperable implementations instead of single-group walled gardens for a little bit longer.

Before you write a comment telling me open source will save the world, do read the Shades of Lock-In. You can be locked into a particular broken implementation no matter whether you can see the source code or not… and I doubt anyone would be willing to reverse-engineer the code to write the specs needed for an interoperable implementation.

This blog post has been significantly enhanced based on feedback from David Gee, Nick Buraglio and Dinesh Dutt. Thank you!


6 comments:

  1. Do the SD-WAN kits need standards? Isn't it beautiful to have your own standards, build such kits, and sell only your own 'lego blocks' compatible with your own orchestrator...?

    On the other hand when you buy such kit you have only one vendor to blame for all issues...

  2. "On the other hand when you buy such kit you have only one vendor to blame for all issues..." << and no way to escape unless you build a wholly new parallel network. Vendor dreamland ;))

  3. I think there are no general rules ("it depends," to mention your famous words). Same as with vendor "lock-in" - the broader context is important.

    I think it's crucial to see the business perspective, the expected lifetime of the solution, the cost of support, etc. Sometimes it's easier to build a "new parallel network".

  4. Good article. As an anecdote: John Moy used to tell you to take your slides off the projector (yeah, the beamers of old ;-) if you couldn't confirm that you had already written code & run it for what you were presenting. The point was not that "code is the standard"; the point was that the quality of what you present will probably be too sketchy and full of holes if an initial implementation did not force you to think through all the involved issues. My personal take on what's going on is that ISO put process before substance and was too water-logged to move @ speed; the IETF went the opposite direction, and that was a big advantage when the things we did were small and not that important, as in "heh, video over internet, why?, heh, 911 on IP, who's crazy?". Now that IP has grown up massively, the sins of the fathers need to be paid: some of them can be fixed (two interoperating implementations in the routing group please before standards track) and some cannot really, as in "let's patch a security architecture back into IP". My usual mildly acerbic 2c ;-)

  5. We can afford some non-standard solutions thanks to existing standards. For example, we can use different SD-WAN solutions thanks to standard routing protocols, which can tie them together or place them at the Internet edge. Avoiding standards is a bad approach, but waiting for a standard can also slow down innovation too much. So we need both, in my opinion: non-standards as predecessors of standards.

  6. I arrived from the VMworld forum with a sentence like this in my mind: "we don't need an NNI interface specification, we use the same approach as the dev team: release the code and boom..." I had the same feeling.


