Yeah, Blame It on Cisco

A Technology Market Builder (in his own words) from a major networking vendor decided to publish a thought leadership article (in my sarcastic words) describing how Cisco’s embrace of complexity harmed the whole networking industry.

Let’s see how black this kettle-blaming pot really is ;), and make sure to have fun reading the comments on the original article.

As with the similar post I published a while ago, I’m not picking on Brocade. They are approximately as good or bad as most other networking vendors. I just can’t stand people publishing such obvious nonsense on third-party web sites.

First, I haven’t seen a single networking vendor that strives to make their products less complex. Every single one of them (maybe excluding a startup or two riding on the simplification message) happily embraces every broken design or stupidity their customers face and tries to solve it with approaches straight from RFC 1925 sections 2.5, 2.6 and 2.11.

It is true, however, that Cisco was (so far) more successful than others, so what every vendor marketer is really saying is “I’m so jealous that Cisco can get away with this better than we do.”

Now for some individual gems from that article:

Ethernet and IP networking is embarrassingly complex

… coming straight from a guy working for a company that helped create Fibre Channel ;)

Network technology has changed very little since the late 1980s, with the exception of faster speeds/feeds and some additional protocols and features.

Replace ‘1980s’ with ‘1994’ and you have a perfect description of Fibre Channel. Anyway, I know I wrote a rant on the same topic after reading similar claims from another networking vendor, but I can’t find it. Too bad; it would be the perfect time to kill this particular zombie.

I could go on and on, but let’s skip to the “solutions”:

Fabric technology, for both the Ethernet and IP layers, substantially simplifies networking.

Yeah, that’s why Brocade uses FSPF in the control plane while everyone else uses IS-IS, why Cisco uses FabricPath encapsulation while everyone else uses TRILL or SPB, and why Brocade implemented proprietary transport of STP BPDUs across their fabric and proprietary VRRP extensions. And they try so hard to avoid complexity that they have multiple solutions for inter-DC fabrics.

Software Defined Networking (SDN), with network intelligence and control centralized for automation, advanced control, and integration of the network with business applications, will provide a layer of abstraction above network hardware.

SDN must be a huge buzzword with some vendors. This isn’t their first masterpiece on the topic, and it probably won’t be the last.


  1. The author does have many good points and your counterpoints aren't that strong. You missed out on Cisco's hardware lock-in for most of their solutions. What about ripping out 5Ks for 9Ks? Proprietary protocols like EIGRP and VTP? Every vendor is going to have some form of proprietary secret sauce, but Cisco just takes it to the next level. And seeing how you use Cisco to pay your bills, I can see your bias.
    1. Dear Anonymous,

      As you don't have the chutzpah to use your real name (or at least a long-term alias) I won't even start arguing the technical points with you.

      Anyway, let me reiterate:

      A) no major vendor is significantly better or worse than others;
      B) Ethernet fabrics and SDN are as messed up as any other networking technology if not more.

      As for who's paying my bills, you couldn't be more wrong. And who's paying yours?
    2. Well put!

      The funny thing about the LinkedIn article is that it's from Brocade.
    3. This comment has been removed by the author.
  2. I understand the frustration of the person who wrote the article. In my work experience, I have faced the challenge of the 800 lb gorilla to one degree or another. I have provided solution options based on requirements for large and small efforts and I am baffled by how often Cisco is selected despite the fact that they are clearly the most expensive, sometimes by a large margin. When I ask why, they essentially quote the "never get fired for buying Cisco" line. I even had one VP shrug and say that he's never even heard of the other vendors (he doesn't follow the industry). So, gripe about their technology, their approach, the complexity... it doesn't matter. It doesn't even matter if you have a legitimately better/cheaper/faster product. Cisco has successfully waged their marketing/culture/perception campaign and THAT is where your challenges lie. If you can't counter this, then best of luck to you. Arista and others have successfully nibbled at this in carrier/cloud/trading markets, but not so much in enterprise markets.

    On a positive note, with more and more webscale companies going white box (or branded whitebox) and writing their own OS, I'm at least seeing more decision-makers asking "why", and that is the first step. Revolutions begin like this...
    1. In summary: “why can Cisco get away with this better than $vendor?” ;))

      Because they consistently invested in building mindshare for the last 25 years:

      * Publicly available documentation and release notes;
      * Design guides and technical whitepapers that are more than blatant self-promotion;
      * Reasonably good training (don't get me started on Cisco training, but I've seen some others);
      * Certifications that have actual career value;
      * Above-average support;
      * Conferences with high-quality presentations;
      * Conference presentations and videos that are publicly available;
      * ...

      If you do that just 5% better than the average, the compound effect over 25 years reaches almost 240% (1.05^25 − 1 ≈ 239%). I'd say that Cisco (overall) is way more than 5% better than average.

      Makes sense?
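      The compounding argument above is easy to check with a couple of lines of Python (the function name is mine, purely for illustration):

      ```python
      # Cumulative advantage of being `edge` better than average every year,
      # compounded over `years` -- the 1.05^25 - 1 figure from the comment.
      def compound_advantage(edge: float, years: int) -> float:
          return (1 + edge) ** years - 1

      print(f"{compound_advantage(0.05, 25):.0%}")  # prints 239%
      ```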
    2. BTW, webscale companies still buy from Cisco.

      Apple just bought something like 5000x NCS5500 .... Webscale companies just care about uptime and capex, and put their own software on it.
  3. First, want to be extraordinarily clear that what I say here is my own opinion, and is not the position of my employer (which happens to be Hewlett Packard Enterprise).

    Observation: HPE did create a very simple network edge in Virtual Connect, which has been on the market for almost a decade. Being simple meant it was in a server paradigm and not a network paradigm. This works for some people and not for others.

    Crystal ball (at which my track record is mixed): I believe that in some university somewhere in the world, a couple of graduate students will be trying to integrate data center networking into yet another Ansible/Puppet/Chef type application deployment framework, will get frustrated trying to deal with networking as we know it, and will write a little shim that runs over the Broadcom SDK (or one of its competitors) which solves their problem deploying and running applications (is a good enough underlay for Calico, for example). They'll use white box switches because there were a few sitting around (they were cheap). This software will turn out to be generally useful in Ethernet-backbone supercomputers and in an interesting subset of data centers. It is not networking as we know it. Bridging this to the real network is an "interesting" problem, with precedents in InfiniBand to Ethernet gateways.


    1. I would say that what you've seen in your crystal ball has already happened at large cloud providers/web properties and is slowly trickling down toward mass adoption.

      However, I doubt we'll see any significant change in the way networking works (there are only so many ways to solve a problem given external constraints) as long as we have to keep at least the socket API intact. Would love to be surprised but remain skeptical...
    2. Good catch. I was trying to make the point that moving packets between two endpoints (two socket interfaces seen by application components) in a data center is a lot simpler than what NICs and switches do today.

      Google has the resources to reinvent that complexity, while my hypothetical graduate students simply cannot.

      For example, imagine a data center plumbed with Sockets Direct Protocol, using Calico-like microsegmentation, and using a DNS-like service to find endpoints (sockets). The application wouldn't care whether the underlay was good old Ethernet, ACI, InfiniBand, Omni-Path, or whatever gets invented next.

      And yes, you are right about the external constraints. A Google data center has millions of north/south connections to the outside world (and that's just to serve the ads) as well as inter-data-center traffic. The place where "dumb and simple" inside the data center meets "real networking" at the data center edge is challenging, but doable, as InfiniBand has shown.

      Will be interesting to watch over the next 20 years and see how this plays out.

  4. Well said Ivan! Excellent points!
  5. Don't forget about the human factor - not that many people are willing to actively learn something new, and training staff to support another vendor can be a big effort as well as a business risk.