Going Back to the Mainframes?

25 years ago when I started my networking career, mainframes were all the rage, and we were doing some crazy stuff with small distributed systems that quickly adapted to topology changes, and survived link, port, and node failures. We called them routers.

Yes, we were crazy and weird, but our stuff worked. We won and we built the Internet, proving that we can build networks bigger than any mainframe-based solution could ever hope to be.

A few years later, following the explosion of minicomputers, x86-based servers, and server virtualization, everyone started using the same architectural concepts and design paradigms. Look at any large modern application – you’ll find a scale-out architecture of distributed systems designed to survive the failure of any individual component, powered by orchestration tools (because that’s the only way to improve the engineer-to-component ratio).

Massimo Re Ferre wrote a great high-level overview of what cloud-native applications should look like. It’s a must-read, as is the Twelve-Factor App web site.

Unfortunately, most of the networking world got stuck in a box-at-a-time mentality (with a few notable exceptions), and some people think the solution to that problem is to go back to the mainframe world and deploy centralized controllers with dumb forwarding components. I wonder whether they ever bothered to look at what we’ve learned in the last 30 years, and where everyone else in IT is heading based on that experience.

The best comment I heard from an old networking engineer while explaining how OpenFlow works: “That’s SNA. We’ve seen it fail. We don’t have to repeat the experiment.”

Latest blog posts in Distributed Systems series

5 comments:

  1. Love the SNA quote. I usually equate OF to other switching technologies that worked similarly (ATM, FR, etc.), but it is, fundamentally, the same thing.
    I am more interested in trying to centralize the management plane and use standardized protocols and interfaces (NETCONF, YANG, Thrift, etc.) to inject config rather than completely messing with the control plane.
  2. Is this what happens when young dudes try to take over the world? They have to repeat all the same mistakes?
  3. It was the network folks who perfected the art of distributed systems while app folks were monolithic until a few years back. Apps have become so distributed that networking started to fall behind and focus on the in-the-box mentality. This is obvious from network services, which are more or less centralized. To add to the problem, network guys invented more protocols to make things really complex. Now the clock is ticking back!
  4. I don't believe these technology transitions have much to do with the technology, but rather more to do with trying to break the stranglehold that one or two vendors have on the market. It was IBM back in the good old mainframe days, and it is pretty much Cisco now, in the networking arena anyway. The other driver is always politics: who owns what, who controls what, and so on.
    All the virtualisation technology was invented by IBM and used on mainframes years ago; JCL decks and scripting languages like REXX were used as the orchestration glue to run the systems. The hardware is smaller and the languages have changed; however, the concepts, problems, etc. are still the same.
  5. Yep, I also come from the mainframe era 25 years ago. Since then I have been through two other central-controller-like implementations – ATM LANE and InfiniBand. Both failed. Distributed control planes will prevail again, but the industry will continuously try to re-invent and rebrand failed technology.