TCP/IP is like a mainframe ... you can’t change a thing
Almost 30 years ago, I was lucky enough to work on one of the best systems of those days, VAX/VMS (BTW, it could run 30 interactive users in 2 MB of main memory), which had everything we’d wished for: it was truly interactive, with a hierarchical file system and file versioning (not to mention remote file access and distributed clusters). I couldn’t possibly understand the woes of IBM mainframe programmers who had to deal with virtualized 132-column printers and 80-column card readers (ironically, running in virtual machines that the rest of the world got some 20 years later). When I wanted to compile my program, I started the compiler; when they wanted to do the same, they had to edit a batch job, submit it (assuming the disk libraries were already created), poll the queues to see when it completed, and then open an editor to view the 132-column printout of compiler errors.
After a long discussion, I started to understand the problem: the whole system was burdened with so many legacy decisions that still had to be supported that there was nothing one could do to radically change it (yeah, it’s hard to explain that to a 20-year-old kid full of himself).
Internet infrastructure is facing a similar problem today: we know we have to solve very tough problems (including address exhaustion and multihoming), but not much is being done. Petr Lapukhov attributed this sorry state of affairs to “short-term ROI requirements” in a comment on my “P2P traffic and the Internet, part 2” post, but the reality is more complex; there are at least three factors at work.
You cannot change the workstations’ TCP/IP stack. At least three host-based solutions have been proposed to solve the IP multihoming problem: Shim6, SCTP and HIP. At least SCTP has been field-proven; it’s used between VoIP gateways to provide routing-independent multihoming and very fast failure detection. However, it’s not available on major client operating systems and thus has no chance of becoming widespread in the near future.
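To see why SCTP-style multihoming is attractive, here’s a toy model of the core idea: an association knows all of the peer’s addresses and fails over between them without tearing down the session. This is purely an illustration of the concept (class names and addresses are made up), not a real SCTP stack:

```python
# Toy model of SCTP-style multihoming. Unlike a TCP connection, which is
# bound to a single source/destination address pair, an SCTP association
# tracks every address the peer advertised and can switch between them.
class Association:
    def __init__(self, peer_addresses):
        # All of the peer's addresses are known when the association starts.
        self.paths = {addr: "active" for addr in peer_addresses}
        self.primary = peer_addresses[0]

    def path_failed(self, addr):
        # Heartbeats detect a dead path; mark it inactive and move the
        # primary to the next reachable address -- the session survives.
        self.paths[addr] = "inactive"
        if addr == self.primary:
            for candidate, state in self.paths.items():
                if state == "active":
                    self.primary = candidate
                    return
            raise ConnectionError("all paths failed")

assoc = Association(["192.0.2.1", "198.51.100.1"])
assoc.path_failed("192.0.2.1")
print(assoc.primary)  # failover without a new connection: 198.51.100.1
```

The point is that the failover happens inside the transport layer, with no BGP involvement, which is exactly why it would take pressure off the global routing table.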
It took IPv6 almost 15 years to be somewhat properly implemented in client operating systems and one cannot expect a faster uptake for something that solves someone else’s problem (IP multihoming causes collective suffering of ISPs, not clients or content providers).
Just to give you an unrelated example: Internet Explorer 6 is still used by 5% of users even though IE7 was launched four years ago (and Firefox took half of the IE market in the meantime).
We could have had a fighting chance if the designers of IPv6 had had a broader charter; they were intentionally limited to layer 3 and thus could not address problems that would require changes to the higher layers (and the TCPng group never got anywhere).
What’s in it for me? Secure BGP, SCTP and a number of other technologies have withered (or never took off) because the people who would gain the most from them were not the ones who would have to make the investment.
SCTP would solve the BGP table explosion (ISP problem), but would have to be deployed on clients and servers. Tough call.
Likewise, Secure BGP solves the authentication-of-origin problem, but requires upgrades to ISP infrastructure, while the primary beneficiaries would be the content providers (secure BGP would stop intentional or accidental address space hijackings).
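The origin-authentication check that Secure BGP variants perform can be sketched in a few lines: compare a received route’s origin AS against a table of authorizations stating who may originate which prefix (the same idea RPKI origin validation later standardized). The table contents and AS numbers below are made up for illustration:

```python
# Sketch of route-origin validation: is this (prefix, origin AS) pair
# authorized? Authorizations are (prefix, origin AS, max prefix length).
import ipaddress

# Hypothetical authorization table (a ROA-like database).
roas = [
    (ipaddress.ip_network("192.0.2.0/24"), 64500, 24),
]

def validate(prefix, origin_as):
    prefix = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, roa_as, max_len in roas:
        if prefix.subnet_of(roa_prefix):
            covered = True  # someone has authorized this address space
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    # Covered space with a wrong origin (or too-specific prefix) is a
    # likely hijack; uncovered space simply has no authorization data.
    return "invalid" if covered else "unknown"

print(validate("192.0.2.0/24", 64500))  # legitimate origin -> valid
print(validate("192.0.2.0/24", 64999))  # hijacked origin -> invalid
```

Note where the costs fall: the ISPs have to build and query this infrastructure on every route update, while the benefit (not having your prefix hijacked) accrues mostly to the content providers.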
ROI requirements. Some of them are short-term (the “I will not deploy IPv6 until there’s customer demand” stupidity), others are more fundamental. In the “good old days” we did what had to be done for the “greater good of the Internet”. Today you can’t just go out and upgrade your gear; you have to follow processes and procedures (rightly so; after all, you’re playing with critical infrastructure), justify your actions and budget your upgrades. Quoting the “greater good” does not work very well with C-level executives.
So, can we expect anything to change before the Internet implodes? It looks like there are only two viable scenarios:
Shared pain. If the pain is shared by all participants, there’s a slight chance the change will be implemented. Google and Facebook are implementing IPv6 to ensure new Internet users will get unhindered (and NAT-free) access to their services (and, more importantly, to the associated ads). Workstation OS vendors have implemented IPv6 because their OS would be dead in the water if it couldn’t be used to connect to the Internet. Some ISPs have to implement IPv6 if they want to earn money from new customers.
Limited change domain. It’s manageable to change something if the changes are limited to the ISP environment, can be deployed gradually and the beneficiaries are the ISPs deploying them. BGP-4 was rolled out very quickly after it became available in 1993. LISP is thus an ideal candidate for the next-generation Internet transport solution ... but it has a “minor” problem – it works best if it’s deployed on the CPE devices and the CPE devices don’t experience the problems LISP solves.
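The map-and-encap idea behind LISP can be illustrated with a short sketch: endpoint identifiers (EIDs) stay stable while routing locators (RLOCs) carry packets across the core, so the global routing table only needs RLOC prefixes. The mapping data and addresses below are invented for illustration; a real mapping system is, of course, a distributed database:

```python
# Toy LISP map-and-encap: look up the destination EID in a mapping
# "database" and wrap the packet in an outer header addressed to the
# destination site's routing locator (RLOC).
import ipaddress

# Hypothetical mapping system: EID prefix -> RLOCs (first = preferred).
mapping_system = {
    ipaddress.ip_network("10.1.0.0/16"): ["203.0.113.1", "203.0.113.9"],
}

def lookup_rloc(dest_eid):
    addr = ipaddress.ip_address(dest_eid)
    for eid_prefix, rlocs in mapping_system.items():
        if addr in eid_prefix:
            return rlocs[0]  # a multihomed site lists several RLOCs
    raise LookupError("no mapping for " + dest_eid)

def encapsulate(inner_packet, dest_eid):
    # The ingress tunnel router wraps the original packet; the core
    # only ever routes on the outer (RLOC) address.
    return {"outer_dst": lookup_rloc(dest_eid), "payload": inner_packet}

pkt = encapsulate({"src": "10.2.3.4", "dst": "10.1.0.7"}, "10.1.0.7")
print(pkt["outer_dst"])  # 203.0.113.1
```

The catch described above is visible even in the sketch: the encapsulation logic has to run on the site-edge (CPE) devices, which are exactly the boxes whose owners feel none of the routing-table pain.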
However, many of the emerging technologies have less to do with “investment protection” than with the “we can’t change our environment” reality. For example, TRILL, OTV, 802.1aq and the like would be completely unnecessary if servers, clients and sysadmins stopped behaving as if IP addresses were cast in stone. The objections come in two flavors: CAPEX (“we spent $$$ on it and now you’re telling us it sucks?!”, think L2-based clustering or unroutable protocols) and OPEX (“what, we have to pay that dude ANOTHER $$$ to support the new stuff?!”, think changing a plug-and-play Ethernet segment into a routed network just to scale it).