Pragmatic Data Center Fabrics

I always love reading the practical advice from Andrew Lerner. Here’s another gem that matches what Brad Hedlund, Dinesh Dutt, I, and numerous others have been saying for ages:

One specific recommendation we make in the research is to “Build a rightsized physical infrastructure by using a leaf/spine design with fixed-form factor switches and 25/100G capable interfaces (that are reverse-compatible with 10G).”

There’s a slight gotcha in that advice: it trades the implicit complexity of chassis switches for the explicit complexity of fixed-form-factor switches.
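To see what “rightsizing” means in practice, here’s a back-of-the-envelope leaf/spine sizing sketch. All the numbers (48 server-facing ports, 8 uplinks, 25G down / 100G up) are purely illustrative assumptions, not vendor specs – plug in the port counts of whatever fixed-form switches you’re actually considering:

```python
import math

def fabric_size(server_ports, leaf_ports=48, leaf_uplinks=8):
    """Estimate leaf/spine counts for a two-tier fabric.

    leaf_ports:   25G server-facing ports per fixed leaf switch (assumed)
    leaf_uplinks: 100G uplinks per leaf, one per spine (assumed)
    """
    leaves = math.ceil(server_ports / leaf_ports)
    spines = leaf_uplinks  # one uplink per spine => spine count
    # Oversubscription ratio: server-facing bandwidth vs. uplink bandwidth
    oversub = (leaf_ports * 25) / (leaf_uplinks * 100)
    return leaves, spines, oversub

leaves, spines, oversub = fabric_size(1000)
print(leaves, spines, round(oversub, 2))  # → 21 8 1.5
```

Note the explicit complexity in action: the spine count is pinned by the per-leaf uplink count, and each spine needs at least as many ports as there are leaves – constraints a chassis switch would hide inside its backplane.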

The rest of this blog post contains links to resources you might find useful if you want to follow Andrew's advice and build a modular data center fabric. You've been warned ;)

Day 2 Operations

While it’s perfectly possible to operate a small fabric as a CLI jockey, once your fabric grows you’ll quickly appreciate any level of automation you can get. At that point, you’ll have to decide whether to offload the complexity back to your vendor and buy their black-box solution, buy an orchestration (oops, intent-based) system from another vendor, or build your own solution from smaller components.
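The “build your own solution from smaller components” option can start very small: keep the fabric description in a data model and render device configurations from templates. Here’s a minimal sketch using only the Python standard library – the interface names, addressing, and data-model fields are made up for illustration and not tied to any vendor or automation tool:

```python
from string import Template

# Hypothetical per-link template for a leaf switch uplink
LEAF_UPLINK = Template(
    "interface $intf\n"
    " description uplink to $spine\n"
    " ip address $addr/31"
)

# Toy data model: one entry per leaf-to-spine link (illustrative values)
fabric = [
    {"intf": "Ethernet49", "spine": "spine1", "addr": "10.0.0.0"},
    {"intf": "Ethernet50", "spine": "spine2", "addr": "10.0.0.2"},
]

config = "\n".join(LEAF_UPLINK.substitute(link) for link in fabric)
print(config)
```

A real solution would add a source of truth, validation, and config deployment on top, but the core idea stays the same: the fabric is data, and the configurations are derived from it.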

Master Modern Data Center Infrastructure

Don’t worry – building modern data center infrastructure isn’t exactly rocket science. You can learn all you need to know about leaf-and-spine fabric designs in one webinar, explore what the vendors are doing in another, or enjoy an EVPN technical deep dive with Dinesh Dutt. You get all three webinars (plus dozens of others) with a subscription.

Alternatively, if you prefer a guided/mentored tour with plenty of checks and homework along the way, check out the Designing and Building Data Center Fabrics online course, or the Building Next-Generation Data Center course if you want to understand a wider range of data center technologies.

Finally, if your management decides to sponsor a vendor and buy a black box, there’s not much I can do beyond pointing out the obvious drawbacks, as I did in numerous SDN webinars. If you want to become an automation solution builder, however, check out the Building Network Automation Solutions online course. It might come in handy even if you go with a black-box solution – at least you’ll be able to identify which problems it should solve, which components it should have, and what layers of abstraction it should offer.


  1. Why would you make such a big effort just to host your mainframe? Two switches and two routers are enough. Everybody is going to the cloud, and you can't compete with the cloud.
    1. As I was saying for a long time ... ;)

      As for "competing with the cloud" – I hear various opinions from people who know better than I do; to me it sounds like saying "you can't compete with public transport".
  2. In today's data centers it's all about cost. If enterprises could take the risk of operating just one core switch, they would do so; many companies are therefore forced to buy at least two of them. So chassis switches will have a future, and non-blocking fabrics are a dream. Maybe I live in another world.
    1. ... maybe you don't need more than two fixed-sized switches:

      ... maybe a leaf-and-spine fabric is less complex and cheaper than two core switches:

      As always, there is no right answer.
  3. In my opinion, Ivan's indoctrinated theory pretty much diverges from reality.
    1. Do tell me more, would you? ;)
  4. Now that Gartner is pushing leaf/spine, does that mean it's officially obsolete? ;-)
    1. Not really. It just means Gartner is really late, considering Clos networks have existed since 1952.

      A network topology, being a physical implementation of a mathematical object with some desired properties, doesn't really become obsolete; it either solves the problem at hand or it doesn't. In the latter case, you need another topology.