Data Center Fabric Designs: Size Matters
The “should we use the same vendor for fabric spines and leaves?” discussion triggered the expected counterexamples. Here’s one:
I have actually worked with a few orgs that mix vendors at both the spine and leaf layers. Can't name names, but they run fairly large streaming services. To me it seems like a play to avoid vendor lock-in, drive price points down, and stay ahead of supply chain issues.
As always, one has to keep two things in mind:
- Size matters. Straight out of RFC 3439:
In particular, the largest networks exhibit, both in theory and in practice, architecture, design, and engineering non-linearities which are not exhibited at smaller scale.
- Complexity matters, and large networks try to avoid it as much as possible because they have plenty of other problems. For example, a large streaming service probably does not run EVPN route reflectors on their spine switches.
If one tried to group data center fabrics by size, one might end up with these categories:
- Hyperscalers. They’re doing whatever it is they’re doing. Some of them are silent (AWS), and others boast how smart they are (Google), even though whatever they’re doing is irrelevant to almost everyone.
- Large IP fabrics (including content providers and non-VMware public clouds). They're running some subset of OSPF/IS-IS/IBGP/EBGP (see the configuration sketch after this list). Of course, you can use that on any mix of vendor boxes; that's how we run the global Internet.
- Enterprise data centers. Most don’t need more than two switches per site. Few organizations need more than a single leaf-and-spine fabric with four (or six or eight) spines.
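To illustrate the "standard protocols run on any mix of boxes" point, here's a minimal sketch of a leaf configuration in an EBGP-only fabric, written in FRRouting syntax with made-up ASNs, router ID, and prefixes. A spine from any other vendor would carry an equivalent (if differently spelled) configuration:

```
! Hypothetical leaf switch in an EBGP-only fabric (FRRouting syntax);
! ASNs, router ID, and prefixes are made up for illustration.
router bgp 65011
 bgp router-id 10.0.0.11
 ! uplinks to spine-1 and spine-2 (which can come from any vendor)
 neighbor 10.1.0.0 remote-as 65000
 neighbor 10.1.0.2 remote-as 65000
 address-family ipv4 unicast
  ! advertise the server-facing subnet
  network 10.11.1.0/24
 exit-address-family
```

Nothing in it is vendor-specific; the interoperability headaches only start once you layer EVPN/VXLAN and vendor-specific tooling on top.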
Building a single enterprise data center fabric with switches from multiple vendors is thus primarily an exercise in futility – the operational costs of dealing with multiple operating systems, vendor support centers, and tooling quirks probably outweigh the acquisition savings.
As always, there are exceptions:
- You have no control over your purchasing department; their bonus depends on how far they squeeze the vendors. It’s time to polish your resume.
- Your workforce is so underpaid that it’s cheaper to deal with perpetual multi-vendor quirks than to buy potentially more expensive boxes.
- You must import the boxes and pay for them with hard-to-get “hard currency.” Unfortunately, I was there decades ago, and it makes you incredibly “creative.”
- You’re boosting your resume.
However, please keep in mind that most people searching for information on the public Internet belong to the “two switches” or “small fabric” crowds. As Jeff wrote in a comment to my Rant: Multi-Vendor EVPN Fabrics blog post:
Average company is blessed beyond their wildest dreams to find an engineer who understands what a BPDU is and that bridging loops are bad. L2oL3, MP-BGP AFIs, ESIs, VTEPs….not supportable by 99% of the wild. All just pushing the needle on sales. 4k VLAN limit isn’t a valid argument in the 99% either.
Please don’t confuse them with totally irrelevant edge cases and outliers.
Absolutely right. I'm working at an MSP, and we do a lot of project work for enterprises with 500 to 2,000 people. That means the IT department is not that big; it's usually just a cost center for them.
Then one of us comes in (often the presales guys first) and makes suggestions on what new shiny thing they could buy. ACI for your DC that doesn't change at all! NSX for your four VMware servers so you can save on networking! VXLAN EVPN for your four networking guys who mostly do client patching (the cables), have never heard of an overlay, and haven't touched BGP in their lives! YAY! Don't get me started on all the automation stuff that is out there. Most companies are way in over their heads with a lot of it, and it all comes crashing down if the wrong guy leaves for a better-paying job.
Be realistic: consult (you MSP guys) and buy (you enterprise guys) the right tool for your team, one that meets your needs. Maybe two or four switches with MLAG/vPC are not cool and shiny, but they get the job done, and you can troubleshoot them at 2 in the morning. Overlays in a campus are really practical and can help solve a lot of "issues" (mostly issues caused by someone who doesn't want to, or doesn't know how to, do it better). But can you support it? When we started with campus fabrics (especially Cisco's SDA), most customers were shocked to learn about dot1x and what it means to roll that out. And that's just the start of it. We always recommend dot1x for various reasons, but most companies can't handle it on their own. Those are all the 99%.
And what are we all talking about? The new stuff for the 1%. And it takes years to see that trickle down to the 99%.
Sorry for the rant; just my 2 cents.
With current compute density exceeding 50 CPU cores and 1 TB of memory per 1-2 RU server, the number of pizza boxes required has gone down drastically. This impacts resiliency. In the past, it was possible to create an N+1 design where a complete rack with a dozen virtualization hosts and the associated ToR switches could go down (physical issues, ToR maintenance). Nowadays you can run the same virtual workloads on less hardware, so in the average enterprise data center you are down to just a handful of switches, which become more mission-critical. It gets even worse if those ‘average’ enterprises decide to go for a blade chassis that only offers 2-4 100G uplinks towards your network.
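To put hypothetical numbers on that blast radius: spread 48 small virtualization hosts across four racks, and losing one rack (or taking its ToR switches down for maintenance) costs you 25% of your compute capacity; consolidate the same workload onto a dozen dense hosts in two racks, and the same event takes out 50%.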