When I started working with Cisco routers in the late 1980s, all you could get were devices with a dozen or so ports and CPU-based forwarding (marketers would call it software-defined these days). Not surprisingly, many presentations at Cisco conferences (before they were called Networkers or Cisco Live) focused on good network design and the split of functionality into core, aggregation (or distribution), and access layers.
What you got by following those rules were stable and predictable networks. Not everyone listened; some customers tried to save money and implemented too many things on the same box… with predictable results (today they would be quick to blame the vendor’s poor software quality).
A few years later, traditional telecoms discovered the Internet and wanted to ride what they thought was a highly profitable wave of new technology (destroying its profitability along the way… but that’s another story). Every single router vendor thought they had discovered a bonanza, started creating boxes targeted at service providers, and we quickly got incredibly complex monstrosities with dual control planes, distributed linecard-based forwarding, and a gazillion kludges needed to implement non-stop forwarding.
I found it really hard to grasp why large organizations wouldn’t understand the need for proper network design and would insist on vendors creating increasingly complex stuff (there’s a reason vendors keep doing that: someone totally disconnected from reality believes their PowerPoint story so much that they’re willing to buy the complex stuff). It took me years to figure out a few potential root causes, including:
- Mindset - telecoms had always been buying one box per PoP and couldn’t grasp the need to build their own switching infrastructure (disaggregated would be the buzzword to use these days);
- Lack of knowledge - designing your own switching infrastructure requires competent engineers, and many organizations simply weren’t willing to invest in them and deal with them afterwards. Buying complex stuff that was promised to be easy to use sounded like a better deal. We’re no different these days, or Gartner wouldn’t publish reports on investing in premium people instead of premium vendors.
- Shifting responsibility and blame - if your own design fails, you have no one else to blame. If an overly complex vendor box fails, even when it was obvious from the get-go that it was too complex for its own good, you blame the vendor, not your own incompetence in selecting the solution… not to mention people who design their networks based on vendor white papers, ignore all sane advice (including advice from their system integration partners and vendor engineers), and then blame everyone but themselves when the whole network melts down.
Have I missed something? Please write a comment!
Are we doing better these days? No. If anything, things got worse with the advent of software-defined unicorn poop and vendor promises getting further and further away from reality. Search my blog for stretched VLANs, software-defined, or intent-based if you’re looking for more fun reading.
Is there anything we can do? Sure. Focus on how things really work, understand the fundamentals, take responsibility for your decisions and your design, and make things as simple as possible by moving complexity to the most appropriate spot in the application stack… a lesson cloud-native application developers digested years ago, because the large public cloud providers aren’t stupid enough to risk the stability of their infrastructure to cater to people who can’t spell scale-out, routing, or DNS.
Want to know more?
I covered these topics in Business Aspects of Networking Technologies, described software-based switching in Network Function Virtualization, and talked about the need for sane application architectures in Designing Active-Active and Disaster Recovery Data Centers. All three webinars are part of the Standard ipSpace.net Subscription.