Yeah, Blame It on Cisco
A Technology Market Builder (in his own words) from a major networking vendor decided to publish a thought leadership article (in my sarcastic words) describing how Cisco’s embrace of complexity harmed the whole networking industry.
Let’s see how black this kettle-blaming pot really is ;), and make sure to have fun reading the comments to the original article.
As with the similar post I published a while ago, I’m not picking on Brocade. They are approximately as good or bad as most other networking vendors. I just can’t stand people publishing such obvious nonsense on third-party web sites.
First, I haven’t seen a single networking vendor that would strive to make their products less complex. Every single one of them (maybe excluding a startup or two riding on the simplification message) happily embraces every single broken design or stupidity their customers face and tries to solve it by applying approaches straight from RFC 1925 sections 2.5, 2.6 and 2.11.
It is true, however, that Cisco has (so far) been more successful than others, so what every vendor marketer is really saying is “I’m so jealous that Cisco can get away with this better than we do.”
Now for some individual gems from that article:
Ethernet and IP networking is embarrassingly complex
… coming straight from a guy working for a company that helped create Fibre Channel ;)
Network technology has changed very little since the late 1980s, with the exception of faster speeds/feeds and some additional protocols and features.
Replace ‘1980s’ with ‘1994’ and you have a perfect description of Fibre Channel. Anyway, I know I wrote a rant on the same topic after reading similar claims from another networking vendor, but can’t find it. Too bad, it would be time to kill this particular zombie.
I could go on and on, but let’s skip to the “solutions”:
Fabric technology, for both the Ethernet and IP layers, substantially simplifies networking.
Yeah, that’s why Brocade uses FSPF in the control plane while everyone else uses IS-IS, why Cisco uses FabricPath encapsulation while everyone else uses TRILL or SPB, and why Brocade implemented proprietary transport of STP BPDUs over their fabric, and proprietary VRRP extensions. And they try so hard to avoid complexity that they have multiple solutions for inter-DC fabrics.
Software Defined Networking (SDN), with network intelligence and control centralized for automation, advanced control, and integration of the network with business applications, will provide a layer of abstraction above network hardware.
SDN must be a huge buzzword with some vendors. This isn’t their first masterpiece on the topic, and it probably won’t be the last.
As you don't have the chutzpah to use your real name (or at least a long-term alias), I won't even start arguing the technical points with you.
Anyway, let me reiterate:
A) no major vendor is significantly better or worse than others;
B) Ethernet fabrics and SDN are as messed up as any other networking technology, if not more so.
As for who's paying my bills, you couldn't be more wrong. And who's paying yours?
The funny thing about the LinkedIn article is that it's from Brocade.
On a positive note, with more and more web-scale companies going white box (or branded white box) and writing their own OS, I'm at least seeing more decision-makers asking "why", and that is the first step. Revolutions begin like this...
Because they consistently invested in building mindshare for the last 25 years:
* Publicly available documentation and release notes;
* Design guides and technical whitepapers that are more than blatant self-promotion;
* Reasonably-good training (don't get me started on Cisco training, but I've seen some others);
* Certifications that have actual career value;
* Above-average support;
* Conferences with high-quality presentations;
* Conference presentations and videos being publicly available;
* ...
If you do that just 5% better than the average, the compound effect over 25 years comes to almost 240% (1.05^25 - 1 ≈ 2.39). I'd say that Cisco (overall) is way more than 5% better than average.
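Or, as a quick back-of-the-envelope check:

```python
# Compound effect of being 5% better than average, year over year, for 25 years
advantage = 1.05 ** 25 - 1
print(f"{advantage:.0%}")   # prints 239%, i.e. almost 240%
```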
Makes sense?
Apple just bought something like 5000x NCS5500 ... web-scale companies just care about uptime and capex and put their own software on them.
Observation: HPE did create a very simple network edge in Virtual Connect, which has been on the market for almost a decade. Being simple meant it was in a server paradigm and not a network paradigm. This works for some people and not for others.
Crystal ball (at which my track record is mixed): I believe that in some university somewhere in the world, a couple of graduate students will be trying to integrate data center networking into yet another Ansible/Puppet/Chef-type application deployment framework, will get frustrated trying to deal with networking as we know it, and will write a little shim that runs over the Broadcom SDK (or one of its competitors) and solves their problem of deploying and running applications (it's a good enough underlay for Calico, for example). They'll use white box switches because there happen to be a few sitting around (they're cheap). This software will turn out to be generally useful in Ethernet-backbone supercomputers and in an interesting subset of data centers. It is not networking as we know it. Bridging it to the real network is an "interesting" problem, with precedents in InfiniBand-to-Ethernet gateways.
-steve
@FStevenChalmers
However, I doubt we'll see any significant change in the way networking works (there are only so many ways to solve a problem given external constraints) as long as we have to keep at least the socket API intact. Would love to be surprised but remain skeptical...
Google has the resources to reinvent that complexity, while my hypothetical graduate students simply cannot.
For example, imagine a data center plumbed with Sockets Direct Protocol, using Calico-like microsegmentation, and using a DNS-like service to find endpoints (sockets). The application wouldn't care whether the underlay was good old Ethernet, ACI, InfiniBand, Omni-Path, or whatever gets invented next.
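Roughly what that could look like from the application's point of view (the service name and port below are made up for illustration; the point is that the socket-level code never changes, whatever the underlay turns out to be):

```python
import socket

# Hypothetical service name; a DNS-like directory resolves it to whatever
# endpoint the current fabric (Ethernet, ACI, InfiniBand, Omni-Path, ...)
# happens to expose for that workload.
SERVICE = ("orders-db.fabric.example", 5432)

# Plain socket API. As long as something SDP-like preserves socket
# semantics underneath, the application neither knows nor cares what the
# underlay is.
with socket.create_connection(SERVICE) as conn:
    conn.sendall(b"hello")
    reply = conn.recv(4096)
```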
And yes, you are right about the external constraints. A Google data center has millions of north/south connections to the outside world (and that's just to serve the ads) as well as inter-data-center traffic. The place where "dumb and simple" inside the data center meets "real networking" at the data center edge is challenging, but doable, as InfiniBand has shown.
Will be interesting to watch over the next 20 years and see how this plays out.
-steve