Response: Any-to-Any Connectivity in the Internet

Bob left a lengthy comment arguing with the (somewhat black-and-white) claims I made in the Rise of NAT podcast. Let’s start with any-to-any connectivity:

From my young millennial point of view, the logic is reversed: it is because of NATs and firewalls that the internet became so asymmetrical (client/server), just like the Minitel was designed (yes, I am French), whereas the Internet (and later the web, which, although a client/server protocol, was meant for everyone to be both a client and a server) was designed to be more balanced.

Let’s start with the early Internet. It had no peer-to-peer applications. It connected a few large computers (mainframes) that could act as servers but also allowed terminal-based user access and thus ran per-user clients.

The client/server dichotomy became more evident as we started connecting low-end machines (IBM PCs and the like) to IP networks. The low-end machines did not have enough resources to be (reasonably good) servers [1], and once you could run Internet applications on your personal computer and drag an email into a trash bin, terminal access quickly seemed bizarrely outmoded. Even though every device was an IP host, the split into mostly-client and mostly-server hosts happened without any pressure from the network side.

Of course, there were always people running web and SMTP servers [2] in their basements, but they were a tiny (but very vocal) minority.

It’s also worth mentioning that all networking technologies [3] (apart from IBM SNA) available in the early 1990s used a single address space and provided any-to-any end-to-end connectivity. IP was neither unique nor better than the others; it just happened to have a big enough address space and a global address allocation mechanism. The sacred cow of any-to-any connectivity was created primarily as an argument for the almost infinite advantage of IPv6 [4] after we had no other option but to start using NAT.

At the same time, we had large country-wide DECnet networks, but their 16-bit address space inherently limited their maximum size. There were also attempts to have a global registry of Novell IPX networks, but they never got far.

It’s also worth noting that most residential customers didn’t care at all about those technical details (as long as they could read emails and browse the web), and large organizations viewed NAT as a welcome demarcation point between internal and public networks. The only people preaching the benefits of unlimited, any-to-any connectivity were the IPv6 True Believers.

The rise of NAT was thus not an evil conspiracy by Big Tech or the cause of the client-server asymmetry. It was a pragmatic consequence of the fact that most paying customers accessed Internet services from clients that were not also servers while the IETF was dragging its feet [5], suffering from the not-invented-here syndrome, and throwing all sorts of crazy ideas into the kitchen sink called IPv6 instead of reusing an already-deployed protocol as the basis for the next-generation Internet.


  1. Typical personal computers in those days had a 4.77 MHz CPU, 640 KB of RAM, and 10 MB disks. However, apart from a few niche applications like Minecraft, very few people run publicly accessible servers on laptops with 4 GHz CPUs, 16 GB of RAM, and 1 TB of disk space.

  2. We ran sendmail with UUCP on an MS-DOS machine with 640 KB of RAM as the core country-wide email node for a while, but that’s not something for the faint-hearted.

  3. That I was able to touch.

  4. IPv6 is full of sacred cows and religious opinions. For example, it took the IETF over 20 years to publish an RFC admitting that the generic extension headers don’t work. It still refuses to acknowledge that it failed to solve IPv6 small-site multihoming, and people heavily influencing the development of a widespread mobile operating system are still on a crusade against DHCPv6.

  5. After tons of coordination, major web properties agreed to enable IPv6 on their websites for one day in 2011, or 16 years after the Recommendation for the IP Next Generation Protocol RFC was published. In the next 14 years, we went from almost zero to 45% IPv6 adoption, in an environment in which every widespread operating system has a high-quality IPv6 stack and every browser implements happy eyeballs.

4 comments:

  1. 4.77 MHz

    Replies
    1. The fact that I immediately realized what you had in mind is scary ;) Fixing...

  2. "....and large organizations viewed NAT as a welcome demarcation point between internal and public networks." Yep! Slammer & CodeRed (etc, etc) got everyone's attention.

  3. > It’s also worth noting that most residential customers didn’t care at all about those technical details

    They don't care about engineering, but they sure do create support tickets about broken P2P applications: Xbox/PS gaming, broken VoIP in gaming lobbies, SIP clients failing to punch through, etc. None of these problems exist on native routed (and static) IPv6.

    In order for P2P over NATted IPv4 to work almost as well as over routed IPv6, we had to deploy a bunch of workarounds: EIM-NAT (endpoint-independent mapping) so that TCP/UDP hole punching works in both directions, and hairpinning on the CGNAT device so that intra-CGNAT traffic works between two CGNAT clients. Since STUN can only discover the public-facing IP:port, hairpinning is what lets 100.64.0.0/10 clients talk to each other over their CGNATted public IP:port.
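
    For illustration, here is a rough Python sketch (standard library only; the STUN server names are merely examples) of how a client can test the EIM property: send a STUN Binding Request from the same local socket to two different servers and compare the mapped ports. If they differ, the NAT is endpoint-dependent and two-way punching is in trouble.

        # Untested sketch: classic NAT behaviour test (RFC 4787 terminology).
        # Sends STUN Binding Requests from ONE local socket to two servers;
        # identical mapped ports suggest endpoint-independent mapping (EIM).
        import os
        import socket
        import struct

        MAGIC_COOKIE = 0x2112A442

        def stun_mapped_address(sock, server):
            """Return the (ip, port) the STUN server saw us as (IPv4 only)."""
            txn_id = os.urandom(12)
            # STUN header: Binding Request (0x0001), length 0, cookie, txn id
            sock.sendto(struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id, server)
            data, _ = sock.recvfrom(2048)
            pos = 20                                   # attributes follow the 20-byte header
            while pos + 4 <= len(data):
                attr_type, attr_len = struct.unpack_from("!HH", data, pos)
                if attr_type == 0x0020:                # XOR-MAPPED-ADDRESS
                    _, _, xport = struct.unpack_from("!BBH", data, pos + 4)
                    xaddr = struct.unpack_from("!I", data, pos + 8)[0]
                    return (socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE)),
                            xport ^ (MAGIC_COOKIE >> 16))
                pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
            raise ValueError("no XOR-MAPPED-ADDRESS in response")

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 54321))                  # fixed local port on purpose
        sock.settimeout(2)
        a = stun_mapped_address(sock, ("stun.l.google.com", 19302))
        b = stun_mapped_address(sock, ("stun.cloudflare.com", 3478))
        print("EIM-NAT" if a == b else "endpoint-dependent mapping", a, b)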

    All this complexity exists because NAT is evil, and even if you argue otherwise, the end result is that it leads to centralisation of content and traffic on a few hyperscale organisations. How is that doing the future of humanity, free speech et al. any favours? I don't know.

    Replies
    1. > All these problems don't exist on native routed (and static) IPv6.

      Wrong. Every decent home router is also a one-way stateful firewall. You just replaced NAT hole punching with firewall hole punching.

      > the end-result is, it leads to centralisation of content/traffic on a few hyperscale organisations

      Wrong. The centralization of content is a side effect of our laziness (a topic for another blog post ;)

    2. > Wrong. Every decent home router is also a one-way stateful firewall. You just replaced NAT hole punching with firewall hole punching.

      Not sure what's 'wrong' here:

      1. Firewall hole punching only involves STUN, and that's it; we move on with our lives.
      2. NAT hole punching in the absence of EIM-NAT + hairpinning (aka 99% of ISPs) breaks P2P, forcing a fallback to TURN relaying, and that is what creates customer support tickets.
      3. With firewalled IPv6, once STUN does its job, there are no customer support tickets, because P2P just works.

      Most popular end-user applications these days are coded with STUN libraries from day one anyway; a developer doesn't need to be a network engineer to pull one of the popular STUN libraries off GitHub. It's crazy how ubiquitous STUN has become in our daily lives, yet it remains invisible to most people unless they PCAP their home and office networks.
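
      To make that concrete: once both peers have learned each other's public (IP, port) via STUN plus whatever signalling channel the application uses, the punch itself is tiny. An untested sketch (the signalling step and the addresses are assumed):

          # Each peer runs this with the other peer's STUN-discovered address.
          # The first outbound packet creates NAT/firewall state, so the
          # peer's inbound packets are then let through; it works the same
          # way for EIM-NAT on IPv4 and for a stateful IPv6 firewall.
          import socket

          def punch(local_port, peer_addr, attempts=10):
              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              sock.bind(("0.0.0.0", local_port))    # must be the port STUN saw
              sock.settimeout(1)
              for _ in range(attempts):
                  sock.sendto(b"punch", peer_addr)  # opens our own state
                  try:
                      data, addr = sock.recvfrom(1024)
                      if addr == peer_addr:         # peer_addr is an (ip, port) tuple
                          return sock               # bidirectional path established
                  except socket.timeout:
                      pass
              raise TimeoutError("no packet from peer; this is TURN-relay territory")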

      I use a stateful firewall in my own home network as well (with PIA IPv6 space), and I can confirm with PCAPs that P2P software such as BitTorrent (nothing wrong with seeding Debian torrents!) and VoIP successfully establishes P2P over STUN with no TURN involvement. It is true P2P, where the src and dst /128 addresses objectively belong to the two endpoints.

      What I'm trying to say is: the goal of modern-day IPv6 should be to get rid of TURN, ensuring P2P works either natively (rare) or with STUN assist (99% of the time); in either case it is true P2P, not TURN-relayed.

      What CAN be a blog post topic (and one I may write about in the future) is basic IPv6 firewall guidelines and logic to preserve SOLICITED (very important keyword) P2P connectivity.
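
      The core of that logic is simple enough to sketch as a toy model (everything below is invented for illustration): a one-way stateful filter forwards inbound packets only when they match a flow an inside host already solicited.

          # Toy model of a one-way stateful IPv6 firewall: inbound traffic is
          # forwarded only if it matches state created by outbound (solicited)
          # traffic, which is exactly why STUN-style punching still matters.
          from typing import NamedTuple

          class Flow(NamedTuple):
              proto: str       # "udp" or "tcp"
              inside: tuple    # (inside IPv6 address, port)
              outside: tuple   # (outside IPv6 address, port)

          state: set = set()

          def outbound(proto, src, dst):
              state.add(Flow(proto, src, dst))    # solicited: remember the flow
              return "forward"

          def inbound(proto, src, dst):
              if Flow(proto, dst, src) in state:  # matches a solicited flow
                  return "forward"
              return "drop"                       # unsolicited traffic never enters

          outbound("udp", ("2001:db8::10", 40000), ("2001:db8:f::1", 40000))
          print(inbound("udp", ("2001:db8:f::1", 40000), ("2001:db8::10", 40000)))  # forward
          print(inbound("udp", ("2001:db8:f::2", 53), ("2001:db8::10", 40000)))     # drop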

      > Wrong. The centralization of content is a side effect of our laziness (a topic for another blog post ;)

      Well, NAT is laziness (because who wants IPv6), and it goes hand-in-hand. This is less of a technical topic and more of a subjective layer 8 topic, that I'm sure, will be a never ending debate :)

  4. Do not forget that CGNAT, and even simple NAT, might provide some kind of privacy, since multiple end nodes are merged behind a shared public IP address and random port numbers.

    Getting non-repudiation requires expensive, detailed logging of the mappings and private address allocations, plus some correlation analysis. It is therefore not practical at large scale, and such investigations are carried out only when the state really enforces them. Even then, telcos need large budgets, and it is only done properly if the state provides additional financing.

    NAT also acts as a kind of simplified firewall: it reduces the attack surface by filtering out basic unsolicited connection attempts with no additional configuration or management needed.
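
    To illustrate what that logging involves: to answer a single abuse report, the telco must have recorded every mapping and then search the lot. A toy sketch (log format and names invented for illustration):

        # Toy CGNAT mapping log: one record per translation. This is the
        # "expensive detailed logging" needed to trace a public ip:port
        # back to an inside host at a given time.
        from dataclasses import dataclass

        @dataclass
        class Mapping:
            start: float     # validity window start (epoch seconds)
            end: float       # validity window end
            inside: tuple    # (100.64.0.0/10 address, port)
            public: tuple    # (shared public address, port)

        def who_was_it(log, public_ip, public_port, when):
            """Correlate an abuse report (ip, port, time) to inside hosts."""
            return [m.inside for m in log
                    if m.public == (public_ip, public_port) and m.start <= when <= m.end]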
