Response: Why Technology Still Matters

My good friend Tom Hollingsworth wrote a great blog post about hypermyopia in the networking industry. I agree with almost everything he wrote (I have to – I’m always telling people to focus on business needs and to change their mentality before relying on shiny new gizmos), but I still think it’s crucial to consider the technology used in the products we’re looking at.

A fantastic technology that has no chance of working in real life, or that doesn’t solve an actual business need, is a waste of developers’ energy and everyone’s time (but most vendors don’t get it). On the other hand, a product that looks awesome on PowerPoint slides (or in eye-candy demos) could have a broken architecture or might use the wrong technology for the job – and it’s impossible to fix these errors without significant reengineering (if at all).

I honestly don’t care about the technology established vendors with long track records use in their mature products. We know how well (or badly) those products perform, and there’s plenty of documented hands-on experience to form an opinion. For example, I don’t care what technology VMware uses in vCenter or what protocols they run between vCenter and ESX hosts. I also (mostly) trust Cisco’s NX-OS verified maximums.

Startups and emerging products from established vendors are a different story. Startups try to dazzle us with the shining past careers of their lead developers (message: trust us, these guys know what they’re doing), and established vendors play on the success of their past (sometimes unrelated) products.

However, I remain skeptical when a major server/printer vendor tells me how great they’re going to be in data center networking (and I was right: it took them years to get their story straight), and I always try to understand the technology a startup uses so I can evaluate whether its product has a fighting chance when faced with reality.

Obviously it’s impossible to judge the implementation quality of a product without a large-scale trial, but some architectures are so broken that it doesn’t take much to figure out they won’t work without major changes… and if a startup evades hard questions, it’s usually a red flag. For a counterpoint, watch the precise and factual answers we got from Carly Stoughton (including “we don’t do that” or “it’s on the roadmap”) in the Cisco ACI presentation during Networking Field Day 9.

Case in point: large-scale OpenFlow-based data center fabrics. When I first heard about them (and read the OpenFlow 1.0 standard), I said “this can never work”… and I was mostly right.
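To make the scaling concern a bit more concrete, here’s a rough back-of-the-envelope sketch in Python. All the numbers are hypothetical assumptions I picked for illustration (server count, VM density, flows per VM, flow-table size), not measurements of any particular product, but they show how quickly per-microflow entries outgrow the flow tables of early OpenFlow hardware:

```python
# Back-of-the-envelope sketch: per-microflow OpenFlow entries vs. switch
# flow-table capacity. All numbers below are illustrative assumptions.

servers_per_tor = 40             # assumed servers attached to one ToR switch
vms_per_server = 20              # assumed VM density per server
concurrent_flows_per_vm = 10     # assumed concurrent microflows per VM
flow_table_size = 2000           # assumed flow-table (TCAM) size of an early OpenFlow switch

# Every active microflow needs its own entry in the ToR switch it traverses.
flows_through_tor = servers_per_tor * vms_per_server * concurrent_flows_per_vm

print(f"Microflow entries one ToR switch would need: {flows_through_tor}")
print(f"Assumed flow-table capacity:                 {flow_table_size}")
print(f"Overcommitment factor:                       {flows_through_tor / flow_table_size:.1f}x")
```

Even with these conservative assumptions the single ToR switch needs several times more flow entries than it can hold, and the numbers get worse once you count spine switches and the controller’s flow-setup rate.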

NEC got their product to work, but only after implementing numerous OpenFlow 1.0 extensions (most of which they removed once they introduced OpenFlow 1.3 support). The last time I checked, they still weren’t running control-plane protocols (LACP, LLDP, STP) with hosts attached to the fabric.

It took Big Switch Networks several false starts and pivots before they figured out that large-scale OpenFlow fabrics can never work with the current version of OpenFlow (want to know more? Watch my OpenFlow Deep Dive webinar). I can’t tell you how well their product works in practice, but at least its current architecture seems to be scalable. On the other hand, their changes to the OpenFlow standard prevent them from using regular OpenFlow switches, which makes them equivalent to Juniper QFabric or Cisco ACI from a lock-in perspective.

So, yes, I think the architectures and technologies still matter, but only after we’ve figured out what problem needs to be solved and what the best (business and process) way of solving it is.

Disclosure: Cisco Systems and some other vendors mentioned in this blog post were indirectly covering some of the costs of my attendance at the Network Field Day events. More…
