Source Code Is Not Standards

One of the oft-repeated messages of the Software-Defined Pundits is “Standards bodies are broken, (open) source code is king”… and I’d guess that anyone who was too idealistic before being exposed to how the sausage is made within the IETF has no problem agreeing with them. However…

One of the benefits of a standards process is that (sometimes) you have to think before you start coding your solution. While the IETF often acts like an Internet Vendor Task Force, its process at the very minimum exposes your ideas to people with different perspectives or opposing views, potentially resulting in either better standards or religious debates (DHCPv6 versus RDNSS, or /48-for-everyone, being my favorites).

Nothing like that can be said for the let’s-get-this-coded approach. I’ve seen large open-source projects that were fantastic from both the architectural and the implementation perspective, and other open-source projects that were a total mess that made spaghetti Bolognese look as ordered as a diamond crystal lattice. How about the early Neutron plugins that used CLI commands to install flows, periodically pulled down from a MySQL database, into the local Open vSwitch?
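
To make that concrete, here is a deliberately simplified, hypothetical sketch of the approach being criticized (the table, column, and connection details are invented, and it assumes the PyMySQL driver and the ovs-ofctl CLI are available): a loop that periodically pulls flows from a database and shells out to the CLI once per flow.

```python
# Hypothetical sketch of the anti-pattern described above. Table, column, and
# credential names are invented; it assumes PyMySQL and the ovs-ofctl CLI
# are available on the node running the agent.
import subprocess
import time

import pymysql


def sync_flows(connection, bridge="br-int"):
    """Pull the desired flow table from the database and replay it via CLI."""
    with connection.cursor() as cursor:
        cursor.execute("SELECT match_spec, actions FROM flows")
        rows = cursor.fetchall()

    for match_spec, actions in rows:
        # One fork/exec of the CLI per flow: no transactions, no error
        # handling, and a full resync on every polling interval.
        subprocess.run(
            ["ovs-ofctl", "add-flow", bridge, f"{match_spec},actions={actions}"],
            check=False,
        )


def main():
    connection = pymysql.connect(host="controller", user="neutron",
                                 password="secret", database="neutron")
    while True:
        sync_flows(connection)
        time.sleep(30)   # poll periodically whether or not anything changed


if __name__ == "__main__":
    main()
```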

Then there’s the “code is self-documenting” approach promoted by people who don’t think documenting their work is important (there are obvious exceptions, like the RSX-11M operating system, where half of the source code real estate was devoted to comments explaining in high-level terms what’s going on in the code). However, having “self-documenting” source code potentially available at your fingertips is not exactly helpful when your LibreOffice client fails to format a simple Word document correctly… and there aren’t many users who have the knowledge necessary to fix the problem (let alone the time).

Networking is no different. Even if you get the source code of a broken product, most networking teams have no chance of fixing it… and don’t forget that “open” networking (for some value of “open”) is all about having multiple interoperable implementations. While the standards used to create those implementations might be imperfect, reverse-engineering someone else’s code to get the protocol on the wire is as wrong as it can get.

When you’re trying to troubleshoot a network-down problem at 3 AM on a Sunday night, digging through the code is not the answer – you MUST understand how things work, and the only way to get there is proper documentation.

Believers in open source will obviously disagree with me, but as long as you don’t have a documented protocol and have only a single implementation, you might as well call that code proprietary (see definition #4 in this Wiktionary article).

Oh, and even if you don’t believe in multiple interoperable implementations, you might still want to troubleshoot what’s going on between network nodes with something like Wireshark, and having no protocol specification makes writing the decoder an extremely fun task.
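
As a minimal illustration of what a protocol specification buys you, here is a hypothetical sketch of a packet decoder (the header layout and field names are invented): when the layout is documented, the decoder is a few lines of struct unpacking; when it isn’t, every field width and byte order has to be reverse-engineered from captures.

```python
# Purely hypothetical protocol header, invented for illustration. With a
# written spec, the decoder is a handful of lines; without one, every field
# width and byte order below would have to be guessed from packet captures.
import struct

HEADER = struct.Struct("!BBHI")   # version, type, payload length, session id


def decode_header(packet: bytes) -> dict:
    """Decode the fixed header of a captured message (hypothetical layout)."""
    if len(packet) < HEADER.size:
        raise ValueError("truncated header")
    version, msg_type, length, session_id = HEADER.unpack_from(packet)
    return {
        "version": version,
        "type": msg_type,
        "length": length,
        "session_id": session_id,
        "body": packet[HEADER.size:HEADER.size + length],
    }


# Example: decode a hand-crafted message
print(decode_header(HEADER.pack(1, 2, 4, 0xCAFE) + b"ping"))
```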

Now what?

Honestly, I don’t have a clear answer. The IETF is slow (and cumbersome for someone who is not paid to sit on a standards body and just tries to get the job done), while relying on open-source code alone can result in a nicely documented product or a hard-to-fix mess (remember OpenSSL?). Is there a third way? Share your opinion in the comments!

8 comments:

  1. Can standardization and speed go together?
  2. Speaking for the routing area - things are getting better; design teams are doing a great job providing timely, high-quality output.
    It is not easy though - for most people IETF work comes on top of their daily jobs, and getting things done requires discipline... I'm chasing
  3. Interesting, Jeff. I had the impression that IETF was a full-time gig for many folks. Essentially a way for vendors to make sure their interests are represented.
  4. Third way: Literate programming as defined by Donald Knuth... :-)
    But what would enforce it?
  5. "... you MUST understand how things work ..."
    Even more important for troubleshooting and fixing the problem is to understand how things MUST work - what the proper behavior on the wire is. And that's where a standard comes in handy. But one must be able to properly interpret the normative language to understand and implement the standard the right way.
  6. Most open source projects start with poor documentation. I wouldn't systematically dismiss the whole concept just because of a rough start and some bad implementations. I don't think OSPF had a good start either, and that was with the IETF. Good open source projects will grow and mature if they get a sufficiently large community.

    One thing is certain though. Vendors will have a hard time protecting their interests with open source projects, although that won't stop them from trying.

    In my opinion both can coexist. Anything that wins the mass-adoption race deserves to be a standard. It's called a de-facto standard.
  7. I believe in running code and rough consensus. When all there is, is consensus, it can be the consensus of the insane.

    There are some things that are just too complex and fast-moving to standardize, and the best you can hope for is a reference implementation.
  8. I think the purpose of standards is to allow customers to mix and match vendors, knowing that if the vendors are standards-compliant, the boxes will interoperate. If there were a requirement for every adopted standard to be supported by an open-source reference implementation, it would really help encourage adoption and also avoid vendor lock-in.
    We see a lot of standards where there are only proprietary implementations and customers have no choice but to go to one of the big vendors to solve their problem.