How Complex Is Your Data Center?

Sometimes it seems like the networking vendors try to (A) create solutions in search of problems, (B) boil the ocean, (C) solve the scalability problems of Google or Amazon instead of focusing on real-life scenarios or (D) all of the above.

Bryan Stiekes from HP decided to take a step in the right direction: let’s ask the customers how complex their data centers really are. He created a data center complexity survey and promised to share the results with me (and you), so please do spend a few minutes of your time filling it in. Thank you!


  1. Started the survey but I can't add comments in fields that require just a number, and just a number would be incredibly misleading.

    For example, network changes/additions happen in bursts, and a low percentage of virtualized compute can indicate either a slow pace of change or lots of workloads where virtualization would be just additional overhead.

    Also, no mention of SAN, whiteboxes in any role, automation, capacity, or private clouds. Far from comprehensive.

    This survey won't give any clear answers or insight. It'll just be an "average temperature of all patients in the whole hospital".

    If Bryan wants to have pure numbers then this survey will produce pure bullshit which would be useless at best and harmful at worst. After all, every network is a unique snowflake.
    1. Thanks for the feedback! I added a comment section to the end of the survey if you feel any of your responses need amplification or qualification.

      Also, study after study has already been done on SANs, whiteboxes, disaggregation, automation, capacity, private clouds, virtualization, public clouds and so on.

      Interestingly enough, however, it's difficult (if not impossible) to find anyone looking at the complexity of the underlays supporting data center workloads at all, let alone attempting to see if there is any correlation to outages. There's been some rumor of intended or attempted research on this topic in academia, but nothing that I've been able to find. (If anyone is aware of research on this topic which I haven't come across, more sources are always better; send them my way via Twitter (@Stiekes).)

      So rather than attempting to build yet another comprehensive survey and bury what I'm actually interested in under a cloud of data, I decided instead to focus on configuration complexity and some outage information. The other stuff is for context and characterization.

      Maybe we'll see there's a correlation, maybe we won't, but if we don't look, all we'll have is anecdote and intuition - and it's hard to get investment with that.

      Thanks for the insights and even more for plugging in on the survey!

    2. I think I see what you're angling for, but you still won't be able to tell whether one number is small because they automate, or another number is large because they have additional SAN config on top of the Ethernet network; whether their config size is indicative of their complexity or they just use a different vendor with a less concise CLI; whether their core routers are unnecessarily bloated with config or simply used as edge devices as well...

      I've completed it, but I think that adding a comment field for every number would be more useful than one big comment field at the end. It's hard to summarize everything, but easy to add small comments on why this or that number deviates from the perceived normal state in the mind of the reviewer. It would also show you what people believe is normal.

      I hope this survey will give you some actual and useful information, but I'm guessing that it'll come mostly from the contact fields and the "I wanna chat about it" button. :-) Anecdotal evidence is analyzable if there is enough of it.
