Twenty-five years ago, when I started my networking career, mainframes were all the rage, and we were doing some crazy stuff with small distributed systems that quickly adapted to topology changes and survived link, port, and node failures. We called them routers.
Yes, we were crazy and weird, but our stuff worked. We won and we built the Internet, proving that we can build networks bigger than any mainframe-based solution could ever hope to be.
A few years later, following the explosion of minicomputers, x86-based servers, and server virtualization, everyone started using the same architectural concepts and design paradigms. Look at any large modern application – you’ll find a scale-out architecture of distributed systems designed to survive the failure of any individual component, powered by orchestration tools (because that’s the only way to improve the engineer-to-component ratio).
Unfortunately, most of the networking world got stuck in a box-at-a-time mentality (with a few notable exceptions), and some people think the solution to that problem is to go back to the mainframe world and deploy centralized controllers with dumb forwarding components. I wonder whether they ever bothered to look at what we’ve learned in the last 30 years, and where everyone else in IT is heading based on that experience.