Category: data center

Automation Win: Configure Cisco ACI with an Ansible Playbook

This blog post was initially sent to subscribers of my mailing list. Subscribe here.

Following up on his previous work with Cisco ACI, Dirk Feldhaus decided to create an Ansible playbook that would create and configure a new tenant and provision a vSRX firewall for that tenant while working on the Create Network Services hands-on exercise in the Building Network Automation Solutions online course.
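
For readers who haven't touched ACI automation yet, here's a minimal Python sketch of the kind of API call the playbook's ACI modules make behind the scenes; the APIC address, credentials, and tenant name are placeholders I made up.

```python
# A rough sketch (not Dirk's playbook): authenticate against the APIC REST API
# and create a tenant -- roughly what the Ansible ACI tenant module does for you.
import requests

APIC = "https://apic.example.com"          # placeholder APIC address

session = requests.Session()
session.verify = False                     # lab only -- APICs often use self-signed certs

# Log in; the APIC returns a session cookie that the Session object keeps for us
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# Create (or update) a tenant object under the policy universe (uni)
reply = session.post(f"{APIC}/api/mo/uni.json",
                     json={"fvTenant": {"attributes": {"name": "Customer-42"}}})
reply.raise_for_status()
```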


Leaf-and-Spine Fabric Myths (Part 3)

Evil CCIE concluded his long list of leaf-and-spine fabric myths (more in part 1 and part 2) with a layer-2 fabric myth:

Layer 2 Fabrics can't be extended beyond 2 Spine switches. I had a long argument with $vendor guys on this. They don't even count SPB as a Layer 2 fabric and so forth.

The root cause of this myth is a lack of understanding of what layer-2, layer-3, bridging, and routing mean. You might want to revisit a few of my very old blog posts before moving on: part 1, part 2, what is switching, layer-3 switches and routers.
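
If you want the one-paragraph version instead, here's a toy Python illustration of mine (all addresses made up): a bridge learns source MAC addresses and floods frames toward unknown destinations, while a router does a longest-prefix lookup on the destination address and never floods.

```python
import ipaddress

# Bridging: learn source MACs, flood unknown unicast within the segment
mac_table = {}                                    # MAC address -> output port
def bridge(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port                  # learn where the source lives
    return mac_table.get(dst_mac, "flood")        # unknown destination -> flood

# Routing: longest-prefix match on the destination IP address, never flood
routes = {ipaddress.ip_network("10.1.0.0/16"): "spine-1",
          ipaddress.ip_network("10.1.2.0/24"): "spine-2"}
def route(dst_ip):
    matches = [p for p in routes if ipaddress.ip_address(dst_ip) in p]
    return routes[max(matches, key=lambda p: p.prefixlen)] if matches else "drop"

print(bridge("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "port-1"))  # 'flood'
print(route("10.1.2.10"))                                          # 'spine-2'
```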


Leaf-and-Spine Fabric Myths (Part 2)

The next set of Leaf-and-Spine Fabric Myths listed by Evil CCIE focused on BGP:

BGP is the best choice for leaf-and-spine fabrics.

I wrote about this particular one here. If you’re not a BGP guru, don’t overcomplicate your network. OSPF, IS-IS, and EIGRP are good enough for most environments. Also, don’t ever turn BGP into RIP with AS-path length serving as hop count.
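
To make that last warning concrete, here's a toy Python illustration (made-up AS numbers, nothing like real BGP code): once every other attribute is left at its default, best-path selection degenerates into picking the shortest AS path, which is just hop count with larger numbers.

```python
# Candidate AS paths toward the same prefix, as collected by a leaf switch
paths = {
    "via spine-1 (same pod)": ["65101", "65001"],
    "via the superspine tier": ["65102", "65200", "65103", "65001"],
}

# With no routing policy in place, the deciding attribute is AS-path length --
# functionally the same metric as RIP's hop count
best = min(paths, key=lambda name: len(paths[name]))
print(f"best path: {best} ({len(paths[best])} AS hops)")
```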


VXLAN and EVPN on Hypervisor Hosts

One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:

I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform and is insisting on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.

Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.
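
Just to show how little magic is involved on the data plane, here's a bare-bones Python sketch of the VXLAN encapsulation a hypervisor VTEP performs (the 8-byte header from RFC 7348 on top of UDP); the remote VTEP address and VNI are made-up examples.

```python
import socket
import struct

VXLAN_PORT = 4789                                # IANA-assigned VXLAN UDP port

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # 8-byte VXLAN header: flags (I bit set = VNI is valid), 24-bit VNI, reserved bits
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame

# Send a dummy Ethernet frame to a (made-up) remote VTEP
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(vxlan_encap(b"\x00" * 64, vni=10042), ("192.0.2.11", VXLAN_PORT))
```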


Leaf-and-Spine Fabric Myths (Part 1)

Apart from the “they have no clue what they’re talking about” observation, Evil CCIE left a long list of leaf-and-spine fabric myths he encountered in the wild in a comment on one of my blog posts. He started with:

Clos fabric (aka Leaf And Spine fabric) is a non-blocking fabric

That was obviously true in the days when Mr. Clos designed the voice switching solution that still bears his name. In the original Clos network every voice call would get a dedicated path across the fabric, and the number of voice calls supported by the fabric equaled the number of alternate end-to-end paths.
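
To put rough numbers on the difference, here's a back-of-the-envelope Python sketch: the strict-sense non-blocking condition from Clos's original circuit-switched design next to the oversubscription ratio a typical leaf-and-spine design settles for (port counts and speeds are made-up examples).

```python
# Strict-sense non-blocking three-stage Clos fabric: you need at least
# m = 2*n - 1 middle-stage (spine) switches for n edge-facing ports per leaf
def spines_for_nonblocking(edge_ports_per_leaf):
    return 2 * edge_ports_per_leaf - 1

print(spines_for_nonblocking(48))              # 95 -- nobody builds that

# What real designs use instead: an acceptable oversubscription ratio
def oversubscription(edge_ports, edge_gbps, uplinks, uplink_gbps):
    return (edge_ports * edge_gbps) / (uplinks * uplink_gbps)

print(oversubscription(48, 25, 6, 100))        # 2.0 -> a 2:1 oversubscribed leaf
```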


Network Troubleshooting Guidelines

It all started with an interesting discussion of weird MLAG bugs during our last Building Next-Generation Data Center online course. The discussion almost devolved into “when in doubt, reload” yammering when Mark Horsfield stepped in, saying “while that may be true, make sure to check and collect these things before reloading”.

I loved what he wrote so much that I asked him to turn it into a blog post… and he made it even better by expanding it into generic network troubleshooting guidelines. Enjoy!


Implications of Valley-Free Routing in Data Center Fabrics

As I explained in a previous blog post, most leaf-and-spine best practices (as in: what to do if you have no clue) use BGP as the IGP routing protocol (regardless of whether it’s needed) with the same AS number shared across all spine switches to implement valley-free routing.

This design has an interesting consequence: when a link between a leaf and a spine switch fails, those two switches can no longer communicate with each other.
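
A toy Python model shows why (AS numbers are made up): with every spine sharing one AS number, the only path left after the failure is a valley path through another leaf, and since that path already carries the spine AS, ordinary AS-path loop prevention throws it away.

```python
SPINE_AS = "65100"                         # one AS number shared by all spine switches

def accepts(local_as, as_path):
    # A BGP speaker drops any update whose AS path already contains its own AS
    return local_as not in as_path

# Leaf-1 (AS 65001) loses its uplink to Spine-1. The only remaining way for
# Spine-1 to hear about Leaf-1 prefixes is Leaf-1 -> Spine-2 -> Leaf-2 -> Spine-1,
# so the update Leaf-2 sends toward Spine-1 carries this AS path:
as_path_at_spine1 = ["65002", SPINE_AS, "65001"]

print(accepts(SPINE_AS, as_path_at_spine1))   # False -> Spine-1 has no route to Leaf-1
```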


VXLAN Broadcast Domain Size Limitations

One of the attendees of my Building Next-Generation Data Center online course tried to figure out whether you can build larger broadcast domains with VXLAN than you could with VLANs. Here’s what he sent me:

I’m trying to understand the differences or similarities between VLAN and VXLAN technologies in terms of (*cast) domain limitations.

There’s no difference between the two on the client-facing side. VXLAN is just an encapsulation technology and doesn’t change how bridging works at all (read also part 2 of that story).
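
The comparison everyone quotes is about the number of available segments, not about how big any single segment can safely grow, and the arithmetic fits in a couple of lines of Python:

```python
usable_vlans = 2**12 - 2     # 12-bit VLAN ID; values 0 and 4095 are reserved
vxlan_vnis = 2**24           # 24-bit VXLAN network identifier

print(f"{usable_vlans} usable VLANs versus {vxlan_vnis} VNIs")
# More segment IDs, but the flooding behavior *within* a segment stays the same.
```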


Routing in Data Center: What Problem Are You Trying to Solve?

Here’s a question I got from an attendee of my Building Next-Generation Data Center online course:

As far as I understood […] it is obsolete nowadays to build a new DC fabric with routing on the host using BGP; the proper way to go is to use IGP + SDN overlay. Is my understanding correct?

Ignoring for the moment the fact that nothing is ever obsolete in IT, the right answer is it depends… this time on the answer(s) to two seemingly simple questions: “what services are we offering?” and “what connectivity problem are we trying to solve?”


Repost: Tony Przygienda on Valley-Free (or Non-ZigZag) Routing

Most blog posts generate the usual noise from the anonymous peanut gallery (if only they had at least a sliver of Statler and Waldorf in them), but every now and then there’s a comment that’s pure gold. The one Tony Przygienda (of RIFT fame) made on the Valley-Free Routing post is so good and relevant that I decided to republish it as a separate blog post. Enjoy!


Do We Need Leaf-and-Spine Fabrics?

Evil CCIE left a lengthy comment on one of my blog posts including this interesting observation:

It's always interesting to hear all kind of reasons from people to deploy CLOS fabrics in DC in Enterprise segment typically that I deal with while they mostly don't have clue about why they should be doing it in first place. […] Usually a good justification is DC to support high amount of East-West Traffic....but really? […] Ask them if they even have any benchmarks or tools to measure that in first place :)

What he wrote proves that most networking practitioners never move beyond regurgitating vendor marketing (because that’s so much easier than taking the first step toward becoming an engineer by figuring out how technology really works).


Traditional Leaf-and-Spine Fabric Versus Cisco ACI

One of my subscribers wondered whether it would make sense to build a traditional leaf-and-spine fabric or go for Cisco ACI. He started his email with:

One option is a "standalone" Spine/Leaf VXLAN-with-EVPN deployment based on Nexus equipment. This approach could probably be accompanied by some kind of automation like Ansible to ease operation/maintenance of the network.

This is what I would do these days if the customer feels comfortable investing at least the minimum amount of work into an automation solution. Having simpler technology plus a well-understood automation solution is (in my biased opinion) better than having a complex black box.


Updated: Networking Modules in Building Next-Generation Data Centers Online Course

We migrated the self-study materials for the network infrastructure and services module of the Building Next-Generation Data Centers online course into the new format, and split the largest module of the course into manageable chunks: data center fabrics 101, designing leaf-and-spine fabrics, overlay virtual networking, IPv6 and network services.

Feedback on the new format is obviously highly welcome. Thank you!
