IT World Canada has recently published an interesting article, “Disband the ITU's IPv6 Group, says expert”. I couldn’t agree more with the title or the main message of the article: there is no reason for the ITU IPv6 group to exist. However, as my long-time readers know, that’s old news ... and the article is unfortunately so full of technical misinformation and myths that I hardly know where to begin. Trying to be constructive, let’s start with the points I agree with.
IPv6 was designed to meet the operational needs that existed 20 years ago. Absolutely true. See my IPv6 myths for more details.
ITU-T has spun up two groups that are needlessly consuming international institutional resources. Absolutely in agreement (but still old news). I also deeply agree with all the subsequent remarks about ITU-T and needless politics (not to mention the dire need of most of ITU-T to find some reason to continue existing). That part of the article should become required reading for any standardization body.
And now for (some of) the blunders:
IPv6 was a geek response to IAB’s adoption of CLNP. First of all, I don’t remember IAB ever adopting CLNP. It did adopt IS-IS (the routing protocol used with CLNP) – an excellent decision that resulted in the only standardized multi-protocol IGP available today (that’s why all the large-scale bridging proposals, including TRILL and 802.1aq, use IS-IS). I can only hope that the loss of distinction between IS-IS and CLNP was a result of the editorial process, not a statement by the author (in which case I would have to start doubting his expertise).
Next, one of the proposals for next-generation IP (TUBA – TCP and UDP over Big Addresses) was indeed based on CLNP, but it was decided to move forward in a way that was more familiar to the IP world (as you know, regardless of window dressing, IPv6 is really IPv4 with minor fixes and longer addresses).
To put my statements in perspective: I was a devout supporter of TUBA and was deeply disappointed when IETF decided to reinvent the wheel instead of using a protocol that was already available, implemented by numerous vendors and field-tested. However, with 20 years of layer 8-10 experience behind me, I can understand that the totally different routing paradigm of CLNP and its variable-length addresses scared the IP gurus ... and it’s always nicer to reinvent the wheel and get the credit for the shiny new toy than to adopt someone else’s work.
Last, I wouldn’t use a pop book published 15 years after the events as a reliable source. When I was stupid enough to believe an “expert” who explained in vivid detail what a catastrophe FTP is, numerous readers pointed out that FTP was, in fact, a well-designed tool for the job it was supposed to solve.
When IPv6 was finally adopted, many in industry made it clear they were not going to use the protocol. Of course, what else would you expect? It was much simpler to create private IP addresses and add NAT on top of IPv4 (all of which was solved within the network layer) than to wait for the rest of the IT industry to implement a new protocol stack on all the hosts, persuade all the developers to change the way they open sockets (using the broken API) and fix all the outstanding applications. It took OS vendors more than a decade to finally implement IPv6 properly and several more years for the off-the-shelf applications to become reasonably IPv6-aware. Who knows what’s lurking in the hidden depths of home-baked enterprise code?
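To illustrate what “changing the way they open sockets” means in practice: the old API forced applications to hard-code an address family (IPv4), while a dual-stack-ready application has to iterate over whatever `getaddrinfo` returns and try each address in turn. A minimal sketch in Python (the helper name `connect_any` is mine, not from any standard library):

```python
import socket

def connect_any(host, port):
    """Connect over IPv4 or IPv6 -- whichever getaddrinfo offers.

    Legacy code did socket.socket(AF_INET, ...) and passed a dotted-quad
    address, baking IPv4 into the application. The address-family-agnostic
    pattern below is what every application had to be rewritten to use.
    """
    last_err = None
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(sockaddr)
            return sock          # first address family that works wins
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err or OSError("getaddrinfo returned no addresses")
```

The point is not the ten lines themselves but that this loop (or its equivalent) had to replace hard-coded `AF_INET` logic in every networked application before IPv6 could be used at all.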
After 16 years of evangelization, the "father of the Australian Internet," Geoff Huston, who became tired of the endless IPv6 hype, demonstrated in 2008 that only 0.4 per cent of the TCP/IP traffic was IPv6. What else would you expect? See above. And, just as an aside, if someone had truly invested 16 years in evangelizing IPv6, we might have seen some results. The sad fact is that IPv6 was largely forgotten after everyone implemented RFC 1918 and NAT.
Governments ... seem obsessed with continuing to drive IPv6 as some kind of panacea. It would be really good to hear from the same self-professed expert what his solution to the IPv4 public address depletion is. Carrier-grade NAT? CLNP? X.25?
Unfortunately, those few government agencies that can see beyond the next elections had to start the IPv6 push because the whole IT industry focused on quarterly results was in a state of denial (not dissimilar to the car industry’s handling of global warming). Without the government-sponsored push, there would be little (if any) IPv6 deployment today and we would be even more unprepared for the IPv4 address depletion.
Twenty years ago, governments pushed CLNP because everyone believed it was the right protocol (IP was a geek toy). History proved them wrong. Lesson learned: don’t trust a protocol designed by a committee? Now they’re pushing IPv6 because the only alternative is the breakdown of the end-to-end Internet.
LISP ... provides a compelling direction that IPv4 or IPv6 lack. LISP is a necessary hack because we can’t change what should truly be changed: the end hosts. I like LISP and the approach it takes to solving multi-homing and traffic engineering problems, but all those problems wouldn’t exist if someone had taken the time to fix the broken TCP/IP architecture. What we’re doing today is introducing another layer of indirection at the network layer because we can’t change the things that are really broken.
A trusted implementation of LISP will provide a level of attribution and routing security that does not exist in the present architecture. Wishful dreams. To start with, LISP relies on underlying IPv4 or IPv6 global transport, which will still be implemented with current tools (BGP) and will thus stay as “secure” as it is today. Second, if we were truly interested in routing security, we could have implemented secure BGP a long time ago, but it didn’t happen for a simple reason: the people who would benefit from it (content providers) were in no rush to pay the people who would have to implement it (ISPs).
To give you another example: it took almost a decade to develop and implement DNSSEC, even though we’ve experienced some very interesting DNS-based attacks in the meantime. Why would LISP be any different?