Category: fabric
Why Didn’t We Have Leaf-and-Spine Fabrics a Decade Ago?
One of my readers watched my Leaf-and-Spine Fabric Architectures webinar and had a follow-up question:
You mentioned the 3-tier architecture was dictated primarily by port count and throughput limits. I can understand that port density was a problem, but can you elaborate on why throughput is also a limitation? Do you mean that a core switch like the 6500 is also not suitable for building a 2-tier network in terms of throughput?
As always, the short answer is it depends, in this case on your access port count and bandwidth requirements.
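To put some numbers on it, here's a back-of-the-envelope Python sketch (all port counts and speeds are made-up examples, not vendor specifications). Because every leaf connects to every spine, the number of leaves in a two-tier fabric is capped by the spine port count, so with the port densities and uplink speeds available a decade ago you couldn't build a large two-tier fabric at a reasonable oversubscription ratio.

# Rough sizing of a two-tier leaf-and-spine fabric. Every leaf connects to
# every spine, so the number of leaves is limited by spine port count and
# the number of spines by leaf uplink count.
def two_tier_fabric(spine_ports, leaf_downlinks, leaf_uplinks,
                    downlink_gbps, uplink_gbps):
    max_leaves = spine_ports
    max_servers = max_leaves * leaf_downlinks
    oversubscription = (leaf_downlinks * downlink_gbps) / (leaf_uplinks * uplink_gbps)
    return max_servers, oversubscription

# Hypothetical "a decade ago" spine with modest 10GE port density
print(two_tier_fabric(spine_ports=64, leaf_downlinks=48, leaf_uplinks=4,
                      downlink_gbps=1, uplink_gbps=10))    # (3072, 1.2)

# Hypothetical modern spine with hundreds of 40GE ports
print(two_tier_fabric(spine_ports=256, leaf_downlinks=48, leaf_uplinks=6,
                      downlink_gbps=10, uplink_gbps=40))   # (12288, 2.0)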
Leaf-and-Spine Fabrics versus Fabric Extenders
One of my readers wondered what the difference between fabric extenders and leaf-and-spine fabrics is:
We are building a new data center for DR, and management wants me to recommend whether to stick with our current Cisco 7k-to-2k ToR FEX solution or prepare for what seems to be the future of the DC: the spine-and-leaf architecture.
Let’s start with “what is leaf-and-spine architecture?”
EVPN: All that Glitters Is Not Gold
Cumulus Linux 3.2 shipped with a rudimentary EVPN implementation and everyone got really excited, including smaller ASIC manufacturers that finally got a control plane for their hardware VTEP functionality.
However, while it’s nice to have EVPN support in Cumulus Linux, the claims of its benefits are sometimes greatly exaggerated.
Why Are High-Speed Links Better than Port Channels or ECMP?
I’m positive I’ve answered this question a dozen times in various blog posts and webinars, but it keeps coming back:
You always mention that high-speed links are better than parallel low-speed links, for example that 2 x 40GE is better than 8 x 10GE. What is the rationale behind this?
Here’s the N+1-th answer (hoping I’m being consistent):
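Part of the rationale is easy to show with a toy Python simulation (my own sketch; the flow sizes and the random "hash" are made up): per-flow load sharing pins each flow to exactly one member link, so no single flow can ever exceed one member's speed, and unlucky hashing overloads some members of a large bundle long before the bundle is nominally full.

# Toy model of per-flow load sharing: each flow is hashed onto exactly one
# member link, so (1) a single flow can never use more than one link's worth
# of bandwidth and (2) unlucky hashing overloads some members before others.
import random

def busiest_link(num_links, flows_gbps):
    load = [0.0] * num_links
    for flow in flows_gbps:
        load[random.randrange(num_links)] += flow   # stand-in for a 5-tuple hash
    return max(load)

random.seed(1)
flows = [random.uniform(0.1, 1.2) for _ in range(100)]  # roughly 65 Gbps of small flows

for links, speed in ((8, 10), (2, 40)):
    worst = busiest_link(links, flows)
    print(f"{links} x {speed}GE: busiest member carries {worst:.1f} Gbps "
          f"of {speed} Gbps available")

# And regardless of hashing luck, a single 15 Gbps elephant flow fits on a
# 40GE member but can never be carried by any single 10GE member.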
Q&A: Building a Layer-2 Data Center Fabric in 2016
One of my readers designing a new data center fabric that has to provide L2 transport across the data center sent me this observation:
While we don’t have plans to seek an open solution in our DC, we are considering ACI or VXLAN with EVPN. Our systems integrator partner expressed the view that VXLAN is still very new. Do you share that view?
Assuming he wants to stay with Cisco, what are the other options?
Building a L3-Only Data Center with Cumulus Linux
Dinesh Dutt was the guest speaker in the second Leaf-and-Spine Fabric Design session. After I explained how you can use ARP/ND information to build a layer-3-only data center fabric that still supports IP address mobility, Dinesh described the details of the Cumulus Linux redistribute ARP functionality and demoed how it works in a live data center.
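If you're wondering what redistribute ARP boils down to conceptually, here's a rough Python sketch of the idea (my illustration, not Cumulus Linux code; the interface name is an example): take the IPv4 neighbors a ToR switch has learned on its server-facing interface and turn each one into a /32 host route that can be announced into the fabric routing protocol, so a guest IP address can move between ToR switches without stretching a VLAN.

# Illustrative only: derive /32 host routes from the kernel neighbor (ARP)
# table. A real implementation would inject these into BGP or OSPF instead
# of printing them.
import subprocess

def arp_derived_host_routes(interface):
    out = subprocess.run(["ip", "-4", "neighbor", "show", "dev", interface],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and "REACHABLE" in fields:
            yield fields[0] + "/32"        # e.g. 10.1.1.20/32

for prefix in arp_derived_host_routes("vlan10"):
    print("advertise", prefix)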
Would You Use Avaya's SPBM Solution?
Got this comment on one of my L2-over-VXLAN blog posts:
I found the Avaya SPBM solution "right on the money" to build that L2+ fabric. Would you deploy Avaya SPBM?
Interestingly, I got that same question during one of the ExpertExpress engagements and here’s what I told that customer:
Save the date: Leaf-and-Spine Fabric Design Workshop in Zurich
Do you believe in a vendor-supplied black box (regardless of whether you call it ACI or SDDC) or in building your own data center fabric using solid design principles?
It should be an easy choice if you believe a business should control its own destiny instead of being pulled around by vendor marketing (to paraphrase Russ White).
Ansible versus Puppet in Initial Device Provisioning
One of the attendees of my Building Next-Generation Data Center course asked this interesting question after listening to my description of the differences between Chef/Puppet and Ansible:
For Zero-Touch Provisioning to work, an agent gets installed on the box during the boot-up process; it contacts the master to report that the box is up and installs the necessary configuration. How does this work with an agent-less approach such as Ansible?
Here’s the first glitch: many network devices don’t ship with a Puppet or Chef agent; you have to install it during the provisioning process.
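To make the contrast concrete, here's a minimal sketch of the agent-less model (hostname, credentials and commands are placeholders): once the ZTP process has given the box a management IP address and SSH access, the provisioning system simply logs in over SSH and pushes whatever it needs; nothing has to be installed on the device first.

# Agent-less provisioning in a nutshell: connect over SSH, run commands, leave.
import paramiko

def push_commands(host, username, password, commands):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    for cmd in commands:
        _, stdout, _ = client.exec_command(cmd)
        print(f"{host}: {cmd} -> {stdout.read().decode().strip()}")
    client.close()

push_commands("leaf01.example.net", "admin", "secret", ["show version"])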
Replacing FabricPath with VXLAN, EVPN or ACI?
One of my friends plans to replace existing FabricPath data center infrastructure, and asked whether it would make sense to stay with FabricPath (using the new Nexus 5600 switches) or migrate to ACI.
I proposed a third option: go with simple VXLAN encapsulation on Nexus 9000 switches. Here’s why:
Why Would I Use BGP and not OSPF between Servers and the Network?
While we were preparing for the Cumulus Networks’ Routing on Hosts webinar, Dinesh Dutt sent me a message along these lines:
You categorically reject the use of OSPF, but we have a couple of customers using it quite happily. I’m sure you have good reasons, and the reasons you list [in the presentation] are ones I agree with. OTOH, why not use totally stubby areas with hosts in such an area?
How about:
Using BGP in Leaf-and-Spine Fabrics
In the Leaf-and-Spine Fabric Designs webinar series we started with the simplest possible design: non-redundant server connectivity with bridging within a ToR switch and routing across the fabric.
After I explained the basics (including routing protocol selection, route summarization, link aggregation and addressing guidelines), Dinesh Dutt described how network architects use BGP when building leaf-and-spine fabrics.
Scaling L3-Only Data Center Networks
Andrew wondered how one could scale the L3-only data center networking approach I outlined in this blog post and asked:
When dealing with guests on each host, if each host injects a /32 for each guest, by the time the routes are on the spine, you're potentially well past the 128k route limit. Can you elaborate on how this can scale beyond 128k routes?
Short answer: it won’t.
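The arithmetic is straightforward; a quick Python sanity check with hypothetical (but not unusual) numbers shows how fast one /32 per guest eats into a 128k route table:

# Hypothetical sizing: adjust the numbers to match your environment.
racks = 50
servers_per_rack = 40
guests_per_server = 80          # VMs or containers, one /32 each
spine_route_limit = 128_000

host_routes = racks * servers_per_rack * guests_per_server
print(f"{host_routes:,} host routes vs. a {spine_route_limit:,} route limit")
# 50 * 40 * 80 = 160,000 routes, already past the limit before you add
# infrastructure prefixes or consider dual-stack deployments.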
Building a L2 Fabric on top of VXLAN: Arista or Cisco?
One of my readers working as an enterprise data center architect sent me this question:
I've just finished a one-week POC with Arista. For fabric provisioning and automation, we were introduced to CloudVision. My impression is that there are still a lot of manual processes when using CloudVision.
Arista initially focused on DIY people, and those people loved the tools Arista EOS gave them: Linux on the box, programmability, APIs… However…
Feedback: Layer-2 Leaf-and-Spine Fabrics
Occasionally I get feedback that makes me say “it’s worth doing the webinars ;)”. Here’s one I got after the layer-2 session of Leaf-and-Spine Fabric Designs webinar:
I work at a higher level of the stack, so it was a real eye-opener, especially with so many opinionated "myths" on the web that haven't been critically challenged, such as [the usefulness of] STP.
There’s more feedback on this web page, where you can also buy the webinar recording (or register for the next session of the webinar once it’s scheduled).