

Why Didn’t We Have Leaf-and-Spine Fabrics a Decade Ago?

One of my readers watched my Leaf-and-Spine Fabric Architectures webinar and had a follow-up question:

You mentioned that the 3-tier architecture was dictated primarily by port count and throughput limits. I can understand that port density was a problem, but can you elaborate on why throughput was also a limitation? Do you mean that a core switch like the 6500 was also not suitable for building a 2-tier network in terms of throughput?

As always, the short answer is it depends, in this case on your access port count and bandwidth requirements.

I know people who built their whole data center around a single Catalyst 6500, and others who built it around a pair of Catalysts using VSS to behave like a single box (so much for the originality of Arista’s spline concept). Obviously they didn’t need more bandwidth than what a single Catalyst (or a pair of them) could provide in those days.

I know other people who used stackable switches connected to a pair of central Catalyst 6500s. Yet again, the forwarding performance of a single box (or a VSS pair) was all they needed.

I also know people who built 3-tier data center networks even though they didn’t really need them, just because they blindly followed vendor guidelines that haven’t changed in over 20 years (I remember the 3-tier network diagrams looking great in PPT slides in the early 90s). Sometimes having too much money hurts you.

Finally, there were data centers that couldn’t have been built with a pair of core switches connecting the access switches, either because of port density limits or because insufficient forwarding performance forced them to insert an intermediate layer of oversubscription. Those were the only data centers that really needed a 3-tier design from the performance/throughput perspective. They also hurt the most, because the bandwidth available between two endpoints depended on where in the data center the endpoints were: connected to the same switch, the same distribution layer, the same core switch, or across the core.
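To see how oversubscription compounds across tiers, here’s a minimal back-of-the-envelope sketch. The port counts and speeds are made-up illustrative numbers, not anything from a specific design:

```python
# Hypothetical port counts and speeds -- illustration only.
def oversubscription(downlinks, down_gbps, uplinks, up_gbps):
    """Ratio of aggregate downlink bandwidth to aggregate uplink bandwidth at one tier."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# Access switch: 48 x 10GE server ports, 4 x 40GE uplinks -> 3:1
access = oversubscription(48, 10, 4, 40)

# Distribution switch: 16 x 40GE downlinks, 4 x 100GE uplinks to the core -> 1.6:1
distribution = oversubscription(16, 40, 4, 100)

# Traffic crossing the core traverses both oversubscribed tiers,
# so the worst-case ratio is the product of the per-tier ratios.
end_to_end = access * distribution

print(access, distribution, end_to_end)
```

With these numbers, two servers on the same access switch get line-rate bandwidth, while traffic crossing the core sees an effective 4.8:1 oversubscription, which is exactly the locality dependence described above.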

On the other hand, do keep in mind that a bunch of ToR switches connected to a pair of modular core switches in an MLAG cluster is as much a leaf-and-spine fabric as a fabric of pizza-box-sized whitebox switches, and if you ignore the oversubscription in the core switches, a lot of data centers already have leaf-and-spine fabrics without knowing it.

Beyond Fabrics

The Building the Next-Generation Data Center online course goes way beyond network fabrics and covers infrastructure topics like network services; compute, storage, and network virtualization; and multi-DC and cloud deployments.

Interested? Register here.

3 comments:

  1. We were close, but many were trained on the classical core/distribution/access model and on the OS offerings of the time. I had some clients back then where we did do all-L3 designs, with the core acting as a core/distribution (spine) and the access switches (for workstations and servers) just as leafs (ToRs), all L3, and with the two-switch approach too. But many of the training guides and texts still had the 3-tier hierarchy outline.

  2. "I know people who built their whole data center around a single Catalyst 6500"

    I walked into this scenario, then the 6509 chassis failed. I am vehemently opposed to chassis solutions because of this. Never again. I <3 spine/leaf(clos)... so hard... so scalable... ...


