One of the oft-repeated messages of the Software-Defined Pundits is “Standards bodies are broken, (open) source code is king”… and I’d guess that anyone who was too idealistic before being exposed to how the sausage is made within IETF has no problem agreeing with them. However…
One of the benefits of a standards process is that (sometimes) you have to think before you start coding your solution, and while IETF often acts like the Internet Vendor Task Force, the IETF process at the very minimum exposes your ideas to people with different perspectives or opposing views, potentially resulting in either better standards or religious debates (DHCPv6 versus RDNSS, or /48-for-everyone, being my favorites).
Nothing like that can be said for the let’s-get-this-coded approach. I’ve seen large open-source projects that were fantastic from both the architectural and the implementation perspective, and other open-source projects that were a total mess, making spaghetti Bolognese look as ordered as a diamond crystal lattice. Remember the early Neutron plugins that periodically pulled flows from a MySQL database and used CLI commands to install them into the local Open vSwitch?
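To make the anti-pattern concrete, here’s a minimal sketch of that kind of design (hypothetical table and column names; SQLite stands in for MySQL so the sketch is self-contained): a poll loop re-reads the whole flow table and shells out to `ovs-ofctl` once per flow.

```python
import sqlite3      # stand-in for the MySQL backend in this sketch
import subprocess

def build_flow_commands(db_path):
    """Read every flow entry from the database and turn each one into an
    'ovs-ofctl add-flow' CLI invocation -- the anti-pattern described above."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT match, actions FROM flows").fetchall()
    conn.close()
    return [["ovs-ofctl", "add-flow", "br0", f"{match},actions={actions}"]
            for match, actions in rows]

def sync_flows(db_path):
    # Called from a periodic poll loop: every cycle re-reads the entire
    # table and forks one CLI process per flow -- slow, racy, and fragile
    # at any realistic scale.
    for cmd in build_flow_commands(db_path):
        subprocess.run(cmd, check=True)
```

Contrast that with programming the switch through a proper protocol (OpenFlow or OVSDB) driven by change notifications instead of periodic full-table polls.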
Then there’s the “code is self-documenting” approach promoted by people who don’t think documenting their work is important (there are obvious exceptions, like the RSX-11M operating system, where half of the source code real estate was devoted to comments explaining in high-level terms what’s going on in the code). However, having “self-documenting” source code potentially available at your fingertips is not exactly helpful when your LibreOffice client fails to format a simple Word document correctly… and there aren’t many users with the knowledge needed to fix the problem (let alone the time).
Networking is no different. Even if you get the source code of a broken product, most networking teams have no chance of fixing it… and don’t forget that “open” networking (for some value of “open”) is all about having multiple interoperable implementations. While the standards used to build those implementations might be imperfect, reverse-engineering someone else’s code to get the protocol on the wire is as wrong as it gets.
When you’re trying to troubleshoot a network-down problem at 3AM on a Sunday night, digging through the code is not the answer – you MUST understand how things work, and the only way to get there is proper documentation.
Believers in open source will obviously disagree with me, but as long as you have an undocumented protocol and only a single implementation, you might as well call that code proprietary (see definition #4 in this Wiktionary article).
Oh, and even if you don’t believe in multiple interoperable implementations, you might still want to troubleshoot what’s going on between network nodes with something like Wireshark, and having no protocol specification makes writing the decoder an extremely fun task.
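Here’s what that “fun” looks like in practice – a tiny decoder for an entirely hypothetical, undocumented protocol header. Every field below (sizes, byte order, meaning) is an assumption of the kind you’d have to reverse-engineer from packet captures; with a published spec, none of it would be guesswork.

```python
import struct

def decode_header(packet: bytes) -> dict:
    """Decode the first 8 bytes of a hypothetical undocumented protocol.
    Without a spec, the field layout below is pure guesswork derived from
    staring at captured packets."""
    if len(packet) < 8:
        raise ValueError("truncated header")
    # Guess: 1-byte version, 1-byte message type, 2-byte length,
    # 4-byte session ID, all big-endian ("network order" -- another guess).
    version, msg_type, length, session = struct.unpack("!BBHI", packet[:8])
    return {"version": version, "type": msg_type,
            "length": length, "session": session}
```

A real protocol dissector has to get every such guess right for every message type, including the corner cases you never happened to capture – which is exactly why the specification matters.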
Honestly, I don’t have a clear answer. IETF is slow (and cumbersome for someone who is not paid to sit on a standards body and just tries to get the job done), and relying on open-source code alone can result in a nicely documented product or a hard-to-fix mess (remember OpenSSL?). Is there a third way? Share your opinion in the comments!