Maybe It's Time We Start Appreciating Standards
A friend of mine sent me a short message including…
There are a number of products that have recently arrived or are coming to market using group encryption systems for IP networks, but are (understandably) not using IPsec.
… which triggered an old itch of mine caused by the “We don’t need no IETF standards, code is king” stupidity.
The Young Turks promoting the “code is king” sentiment are probably too young to remember the days when every computer vendor had its own networking stack, when some of them had their own LAN implementations, when we had three incompatible encapsulation formats on a single LAN technology, when it was nearly impossible to interconnect equipment from multiple vendors, when even Unix systems weren’t exactly compatible…
The only reason they can connect to a WiFi hotspot in a nearby coffee shop and start spouting opinions is because someone spent decades developing data-link-layer standards (Ethernet and WiFi), network- and transport-layer standards (IP and TCP), application-layer standards (HTTP), control-plane protocols (BGP, IS-IS, OSPF), and a plethora of other things that make the Internet work (DNS, DHCP…). Sitting on top of all that infrastructure and claiming “we don’t need no stinking IETF” seems… shortsighted?
Don’t get me wrong - I hate the IETF process as much as anyone else (it took years to agree on documenting best practices for securing BGP), but the rounds of endless arguing sometimes have a positive side-effect: we might avoid the “I cut it three times and it’s still too short” behavior.
On the other hand, there are people participating in standards bodies for the sake of participating, job security, blocking competing efforts, or spending time in interesting locations with other people’s money. History is full of “standards” created because a bunch of people had nothing better to do (see also: OSI model battle scars), and the IETF seems to be heading in the same direction… or as a friend of mine recently put it:
I hate the IETF not just because the process is long and painful, but because it’s filled with draft writers, most of whom wouldn’t know a good implementation if it hit them in the face. It’s all draft by committee, and we know where that goes.
A long time ago, the IETF had a safeguard against half-baked (and frivolous) ideas: they wanted to see at least two independent implementations before something could become a Proposed Standard RFC. Unfortunately that requirement was lifted a long time ago, citing the need for timeliness… but how do you know a specification is good enough if only a single group ever tried to implement it? How can your test cases provide good-enough coverage if you never encounter another implementation in the wild… and why should future generations deal with a stack of obsoleted RFCs just because nobody stress-tested the ideas before the draft was baked into an RFC?
In the end, we need both: a thoughtful process grounded in “rough consensus and working code” that results in usable and useful standards… and fortunately, at least some working groups still believe in the old way of doing things, so we might have interoperable implementations instead of single-group walled gardens for a little bit longer.
This blog post has been significantly enhanced based on feedback from David Gee, Nick Buraglio and Dinesh Dutt. Thank you!