The Cost of Disruptiveness and Guerrilla Marketing

A Docker networking rant from my good friend Marko Milivojević triggered a severe case of Deja-Moo: a flood of unpleasant memories of too-successful “disruptive” IT vendors.

Before moving on, please note that the following observations were made from my outsider perspective. If I got something badly wrong, please correct me in a comment.

Imagine you’re working for a startup creating a cool new product in the IT infrastructure space (if you have an oversized ego, you call yourself a “disruptive thought leader” on your LinkedIn profile), but nobody is taking you seriously. How about some guerrilla warfare: advertise your product to the people who hate IT operations (today we’d call that Shadow IT).

Every now and then that trick results in an unexpected wild success, and a horrendous nightmare for everyone in IT Operations who has to support the former-guerrilla environment that was dumped on them. Most probably that environment (or product) was never designed for large-scale use, and was never fixed or redesigned to adjust to the new use cases… but of course the now-successful vendors rarely care about those unintended consequences.

Novell Netware

The first vendor using this trick that I had to deal with was Novell with its Netware product. Instead of trying to connect to all sorts of Enterprise Systems (like IBM, CDC, DEC, Univac, Burroughs, Honeywell…), they created this standalone thingy that could run on anything slightly better than barbed wire (remember ARCnet?) and started selling it to departments that needed simple file sharing capabilities.

All of a sudden they became hugely successful (until Microsoft started offering the same functionality bundled with Windows servers), and we were asked to stretch a protocol designed as a just-good-enough kludge for a single coax cable across WAN networks, and to build huge networks running Novell IPX. The whole thing was so bad that Cisco made good money with software features like Service Advertising Protocol (SAP) filters (SAP filters became mandatory items in networking RFPs in those days).

Novell tried to fix things years later with NLSP (a link-state routing protocol for IPX/SAP), but it was too little, too late: almost everyone had moved to TCP/IP by that point. Even though we built numerous Novell IPX networks in those days, I’ve never seen NLSP deployed.

VMware Networking

Imagine you have this cool virtualization idea, but the big guys don’t take you seriously… so you decide to fly under the radar and make your product as invisible as possible, trying to persuade developers that it’s the best thing since sliced bread.

Unfortunately, your product has to work with the existing networking infrastructure, and while you can use multiple MAC and IP addresses on the same interface card (nobody was serious about IP or MAC security in those days), anything else would complicate your life because it would require your customers to talk to their internal IT infrastructure teams.

End result: bridging without STP, port channels without LACP, no LLDP…

Even worse: because VMware tried to support every possible lunacy (like Microsoft NLB), there was no way they could reliably figure out a virtual machine’s IP address, so when the ESXi virtual switch has to send a MAC frame on behalf of a virtual machine, it sends something that has no IP address in the payload. Welcome to the famous RARP kludge.
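
To make the difference concrete, here’s a minimal sketch in Scapy (the MAC and IP addresses are made up) of the two kinds of announcements: the RARP-style frame carries nothing but the moved MAC address, while a gratuitous ARP also ties that MAC to the VM’s IP address.

# Minimal sketch using Scapy; MAC/IP values are made up for illustration.
from scapy.all import Ether, ARP

vm_mac = "00:50:56:aa:bb:cc"   # hypothetical VM MAC address
vm_ip  = "192.0.2.10"          # hypothetical VM IP address

# RARP-style announcement (Ethertype 0x8035, opcode 3 = reverse request):
# the only useful information is the source MAC; the payload carries no IP address.
rarp_announce = (
    Ether(src=vm_mac, dst="ff:ff:ff:ff:ff:ff", type=0x8035) /
    ARP(op=3, hwsrc=vm_mac, hwdst=vm_mac, psrc="0.0.0.0", pdst="0.0.0.0")
)

# Gratuitous ARP: binds the VM's MAC to its IP address, so anything keeping
# an ARP cache (routers, load balancers...) learns about the move as well.
gratuitous_arp = (
    Ether(src=vm_mac, dst="ff:ff:ff:ff:ff:ff") /
    ARP(op=2, hwsrc=vm_mac, psrc=vm_ip, hwdst="ff:ff:ff:ff:ff:ff", pdst=vm_ip)
)

# sendp(rarp_announce) would put the frame on the wire
# (needs root privileges and "from scapy.all import sendp").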

Notes:

  • There are two ways to deal with enterprise lunacies: you either try to accommodate every stupidity anyone ever made (the usual Enterprise IT Vendor way), or tell the customers they can take their business somewhere else (the AWS/Azure/GCP way). Guess which one doesn’t work with the guerrilla marketing approach ;)
  • For whatever reason (unless I’m totally mistaken, in which case please fix my ignorance) most other hypervisors, including Hyper-V, can spell IP and send a gratuitous ARP instead of a RARP packet after a MAC move event.

But wait, there’s more: a few years later VMware created this awesome technology that allows administrators to move a running virtual machine between servers, but it only works if there’s a layer-2 network between the source and destination hypervisors. Ignoring the laws of physics, VMware’s marketing department started selling that technology as a multi-site disaster avoidance panacea, probably indirectly triggering more data center meltdowns than any other crazy idea out there.

Years later, VMware tried to fix the mess they created with VXLAN and later with NSX. Ask me how that turned out in about a decade ;)

Docker Networking

Fast-forward a decade. At least some public cloud providers have figured out that the flat earth theory doesn’t work well, and started enforcing rigorous one-IP-per-VM and no-nasty-tricks rules… and there’s a startup that’s trying to build another disruptive technology on top of Linux containers and the “unicast IP forwarding only” limitations imposed by those public cloud providers.

Welcome to Docker Networking, a morass of NAT, complex iptables rules, overlay virtual networking, and internal and external IP addressing… At least in this particular case, I don’t think they could have done anything more sensible given the constraints and the desire to have an independent TCP/IP stack within each container.
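
If you’ve never looked at it from the operations side, here’s roughly what that translates to in practice — a quick sketch using the Docker SDK for Python; the image name and port numbers are arbitrary, and the comments describe the default bridge-network behavior.

# Quick sketch using the Docker SDK for Python (pip install docker);
# image name and port numbers are arbitrary examples.
import docker

client = docker.from_env()

# Publishing a port: the container gets an internal address on the default
# bridge network (172.17.0.0/16 by default), the engine adds an iptables DNAT
# rule mapping host port 8080 to container port 80, and outbound traffic is
# source-NATed (MASQUERADE) to the host address.
web = client.containers.run("nginx", detach=True, ports={"80/tcp": 8080})

web.reload()
print(web.attrs["NetworkSettings"]["IPAddress"])   # internal (bridge) address
print(web.attrs["NetworkSettings"]["Ports"])       # external (published) mapping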

The really sad part: it would be easy to implement Docker Networking with IPv6, a /64 prefix per host, and ACLs to enforce limited inter-container communication.
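
A minimal sketch of that idea, assuming each host gets a /64 (the prefix below is a made-up documentation prefix) and simply hands out consecutive addresses from it — no NAT, no overlay, and inter-container policy becomes a plain packet-filter problem:

# Minimal sketch of the "one /64 per host" idea; the prefix is a made-up
# documentation prefix, and the address assignment is deliberately naive.
from ipaddress import IPv6Network

host_prefix = IPv6Network("2001:db8:42:1::/64")   # delegated to this Docker host

def container_address(index: int):
    """Globally routable address for the Nth container on this host."""
    return host_prefix[index + 1]                 # skip the all-zeros address

for i in range(3):
    print(f"container {i}: {container_address(i)}")
# container 0: 2001:db8:42:1::1
# container 1: 2001:db8:42:1::2
# container 2: 2001:db8:42:1::3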

Alas, we’re way past that point. For example, one of the overlay virtual networking implementations runs in a user-mode program connected to individual containers through tap interfaces. What could possibly go wrong with that? ;)

3 comments:

  1. TAP interfaces work on layer 2 so there's a potential for loops. Or what was your thought?
    Replies
    1. There’s that... there’s also the problem of going from user space to kernel (Linux bridge), back to user space (VXLAN encapsulation process), and back to kernel... exactly what you need when you bought a CPU with too many cores ;)) Or maybe it’s not so bad... some more details here: https://machinezone.github.io/research/networking-solutions-for-kubernetes/
  2. For Docker networking, “have an independent TCP/IP stack within each container” and “implement Docker Networking with IPv6” are actually what these software engineers are doing to the networking, just in reverse. Both make the networking part better, but will bring issues to the software side.