Category: Data Center
Odd Number of Spines in Leaf-and-Spine Fabrics
In the market overview section of the introductory part of the data center fabric architectures webinar I recommended using a larger number of fixed-configuration spine switches instead of two chassis-based spines when building a medium-sized leaf-and-spine fabric, and explained the reasoning behind that recommendation (increased availability, reduced impact of a spine failure).
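A back-of-the-envelope calculation (mine, not from the webinar; the 100GE uplink speed is an arbitrary example) illustrates the "reduced impact of spine failure" argument, assuming every leaf has one equal-speed uplink to every spine:

```python
# Capacity impact of losing one spine, assuming each leaf has one
# equal-speed uplink to every spine. All values are illustrative only.

def leaf_uplink_capacity(spines: int, uplink_gbps: int = 100) -> int:
    """Total uplink bandwidth of a single leaf in Gbps."""
    return spines * uplink_gbps

for spines in (2, 3, 4, 5):
    total = leaf_uplink_capacity(spines)
    remaining = leaf_uplink_capacity(spines - 1)
    loss = 100 * (total - remaining) / total
    print(f"{spines} spines: {total} Gbps per leaf, "
          f"{loss:.0f}% capacity loss when one spine fails")
```

With two spines a single failure halves the fabric capacity; with four or five spines the loss drops to 25% or 20%, and nothing in the math cares whether the spine count is odd or even.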
One of the attendees wondered about the “right” number of spine switches – does it have to be four, or could you have three or five spines? In his words:
Interview: Active-Active Data Centers With VXLAN and EVPN
Christoph Jaggi asked me a few questions about using VXLAN with EVPN to build data center fabrics and interconnects (including active/active data centers). The German version was published on Inside-IT; here’s the English version.
He started with an obvious one:
What is an active-active data center, and why would I want to use it?
Numerous organizations have multiple data centers for load sharing or disaster recovery purposes. They could use one of their data centers and have the other(s) as warm or cold standby (active/backup setup) or use all data centers at the same time (active/active).
Using MPLS+EVPN in Data Center Fabrics
Here’s a question I got from someone attending the Building Next-Generation Data Center online course:
Cisco NCS5000 is positioned as a building block for a data center MPLS fabric – a leaf-and-spine fabric with MPLS and EVPN control plane. This raised a question regarding MPLS vs VXLAN: why would one choose to build an MPLS-based fabric instead of a VXLAN-based one assuming hardware costs are similar?
There’s a fundamental difference between MPLS- and VXLAN-based transport: the amount of coupling between edge and core devices.
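To make the coupling argument a bit more tangible, here is a sketch (mine, with made-up addresses, VNI and label values) of the two encapsulations built with scapy. From a spine switch's perspective the VXLAN packet is just another IP/UDP packet toward a leaf loopback, while the MPLS packet can only be forwarded by a device that holds transport-label state learned from the edge (via LDP, SR, or BGP-LU):

```python
from scapy.all import Ether, IP, UDP      # VXLAN layer ships with scapy
from scapy.layers.vxlan import VXLAN
from scapy.contrib.mpls import MPLS       # MPLS lives in scapy.contrib

# Tenant (overlay) frame -- addresses are placeholders
payload = Ether(src="00:aa:00:00:00:01", dst="00:bb:00:00:00:02") / \
          IP(src="10.1.1.10", dst="10.1.2.20")

# VXLAN: tenant frame carried in UDP/IP between leaf loopbacks.
# A core device only needs an IP route to 192.0.2.2; it never sees
# the VNI or the tenant addresses.
vxlan_pkt = IP(src="192.0.2.1", dst="192.0.2.2") / \
            UDP(dport=4789) / VXLAN(vni=10100) / payload

# MPLS: tenant frame rides on a label stack (outer Ethernet omitted).
# Every core device on the path must hold forwarding state for the
# transport label, coupling the core to the edge far more tightly
# than plain IP reachability does.
mpls_pkt = MPLS(label=16001, s=0) / MPLS(label=30100, s=1) / payload

print(vxlan_pkt.summary())
print(mpls_pkt.summary())
```

In short: a VXLAN core only needs IP reachability to the VTEP loopbacks, while an MPLS core has to participate in whatever label distribution mechanism the edge devices use.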
Observability Is the New Black
In early October I had a chat with Dinesh Dutt discussing the outline of the webinar he’ll do in November. A few days later Fastly published a blog post on almost exactly the same topic. Coincidence? Probably… but it does seem like observability is the next emerging buzzword, and Dinesh will try to put it into perspective answering these questions:
VMware NSX: The Good, the Bad and the Ugly
After four live sessions we finished the VMware NSX Technical Deep Dive webinar yesterday. Still have to edit the materials, but right now the whole thing is already over 6 hours long, and there are two more guest speaker sessions to come.
Anyway, in the previous sessions we covered all the good parts of NSX and a few of the bad ones. What was left for yesterday were the ugly parts.
Automation Win: Configure Cisco ACI with an Ansible Playbook
This blog post was initially sent to subscribers of my mailing list. Subscribe here.
While working on the Create Network Services hands-on exercise in the Building Network Automation Solutions online course, Dirk Feldhaus followed up on his previous work with Cisco ACI and created an Ansible playbook that creates and configures a new tenant and provisions a vSRX firewall for that tenant.
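Dirk's solution is an Ansible playbook; as a rough illustration of what such a playbook automates under the hood, here's a minimal Python sketch of the corresponding APIC REST calls (APIC address, credentials and tenant name are placeholders, and error handling and certificate verification are glossed over):

```python
import requests

APIC = "https://apic.example.com"          # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}

session = requests.Session()
session.verify = False                     # lab-only: skip certificate checks

# Authenticate; the APIC returns a session cookie that requests.Session reuses.
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# Create (or update) a tenant -- roughly what the aci_tenant Ansible module does.
tenant = {"fvTenant": {"attributes": {"name": "Customer1",
                                      "descr": "created via REST"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
resp.raise_for_status()
print(resp.json())
```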
Leaf-and-Spine Fabric Myths (Part 3)
Evil CCIE concluded his long list of leaf-and-spine fabric myths (more in part 1 and part 2) with a layer-2 fabric myth:
Layer 2 Fabrics can't be extended beyond 2 Spine switches. I had a long argument with $vendor guys on this. They don't even count SPB as a Layer 2 fabric, and so forth.
The root cause of this myth is a lack of understanding of what layer-2, layer-3, bridging and routing mean. You might want to revisit a few of my very old blog posts before moving on: part 1, part 2, what is switching, layer-3 switches and routers.
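If you want the one-paragraph refresher: bridging learns forwarding information from the data plane and floods what it doesn't know, while routing relies on control-plane-learned prefixes and drops what it can't match. Here's a deliberately simplified toy model (mine, not from any of those posts) of the two behaviors:

```python
class Bridge:
    """Toy layer-2 forwarding: data-plane learning plus flooding."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                  # MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port    # learn from the data plane
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # unknown destination: flood out of every other port
        return [p for p in self.ports if p != in_port]


class Router:
    """Toy layer-3 forwarding: no learning, no flooding, drop on miss."""
    def __init__(self, routes):
        self.routes = routes                 # prefix -> next hop (control plane)

    def forward(self, dst_ip):
        # crude string-prefix stand-in for longest-prefix match
        for prefix, next_hop in sorted(self.routes.items(),
                                       key=lambda r: -len(r[0])):
            if dst_ip.startswith(prefix):
                return next_hop
        return None                          # no route: drop the packet
```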
Leaf-and-Spine Fabric Myths (Part 2)
The next set of Leaf-and-Spine Fabric Myths listed by Evil CCIE focused on BGP:
BGP is the best choice for leaf-and-spine fabrics.
I wrote about this particular one here. If you’re not a BGP guru, don’t overcomplicate your network. OSPF, IS-IS, and EIGRP are good enough for most environments. Also, don’t ever turn BGP into RIP with AS-path length serving as hop count.
VXLAN and EVPN on Hypervisor Hosts
One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:
I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform and is insisting on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.
Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.
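To illustrate just how ordinary that is: on a Linux hypervisor the whole VXLAN encap/decap function is a single kernel device attached to a bridge. Here's a sketch of the required commands wrapped in Python (VNI, VTEP address and interface names are placeholders; the commands need root privileges on a lab host):

```python
import subprocess

# Create a VXLAN interface on a Linux hypervisor and attach it to a bridge.
# VNI, VTEP address and interface names are placeholders for illustration.
commands = [
    # VXLAN device: VNI 10100, local VTEP 192.0.2.11, standard UDP port 4789.
    # 'nolearning' because a control plane (e.g. EVPN) would populate the
    # forwarding entries instead of data-plane flood-and-learn.
    "ip link add vxlan10100 type vxlan id 10100 local 192.0.2.11 "
    "dstport 4789 nolearning",
    "ip link add br10100 type bridge",
    "ip link set vxlan10100 master br10100",
    "ip link set vxlan10100 up",
    "ip link set br10100 up",
]

for cmd in commands:
    subprocess.run(cmd.split(), check=True)
```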
Leaf-and-Spine Fabric Myths (Part 1)
Apart from the “they have no clue what they’re talking about” observation, Evil CCIE left a long list of leaf-and-spine fabric myths he encountered in the wild in a comment on one of my blog posts. He started with:
Clos fabric (aka Leaf And Spine fabric) is a non-blocking fabric
That was obviously true in the days when Mr. Clos designed the voice switching solution that still bears his name. In the original Clos network every voice call would get a dedicated path across the fabric, and the number of voice calls supported by the fabric equaled the number of alternate end-to-end paths.
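Even before getting into statistical multiplexing, a quick arithmetic check (my example, with arbitrary port counts and speeds) shows whether a leaf switch is even wired for non-blocking operation: its uplink bandwidth has to match its downlink bandwidth.

```python
# Is a leaf switch wired for non-blocking operation? It can only be
# non-blocking if uplink bandwidth >= downlink bandwidth.
# Port counts and speeds are arbitrary example values.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 48 x 25GE server ports, 6 x 100GE uplinks (one per spine)
print(f"{oversubscription(48, 25, 6, 100)}:1")   # 2.0:1 -> oversubscribed

# 32 x 100GE server ports, 16 x 200GE uplinks
print(f"{oversubscription(32, 100, 16, 200)}:1") # 1.0:1 -> wired for non-blocking
```

A 2:1 (or worse) oversubscription ratio means the fabric cannot be non-blocking no matter how well the traffic is spread across the spines.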