Category: data center

Interview: Active-Active Data Centers With VXLAN and EVPN

Christoph Jaggi asked me a few questions about using VXLAN with EVPN to build data center fabrics and interconnects (including active/active data centers). The German version was published on Inside-IT; here’s the English version.

He started with an obvious one:

What is an active-active data center, and why would I want to use it?

Numerous organizations have multiple data centers for load sharing or disaster recovery purposes. They could use one of their data centers and have the other(s) as warm or cold standby (active/backup setup) or use all data centers at the same time (active/active).


Using MPLS+EVPN in Data Center Fabrics

Here’s a question I got from someone attending the Building Next-Generation Data Center online course:

Cisco NCS5000 is positioned as a building block for a data center MPLS fabric – a leaf-and-spine fabric with an MPLS data plane and an EVPN control plane. This raised a question regarding MPLS versus VXLAN: why would one choose to build an MPLS-based fabric instead of a VXLAN-based one, assuming hardware costs are similar?

There’s a fundamental difference between MPLS- and VXLAN-based transport: the amount of coupling between edge and core devices.
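
To make the coupling argument more tangible, here's a toy Python model (my illustration, not vendor configuration or anything from the original post) of the state a spine switch has to carry in each design: with VXLAN it only routes IP packets between leaf loopbacks, while with MPLS it also has to participate in label distribution and hold label-switching entries.

```python
# Toy model: the forwarding state one spine (core) switch needs for a given
# number of leaves in a VXLAN-based versus an MPLS-based fabric.
def spine_state(num_leaves: int, transport: str) -> dict:
    # In both designs the spine needs an IP route to every leaf loopback.
    state = {"ip_routes_to_leaf_loopbacks": num_leaves}
    if transport == "vxlan":
        # VXLAN tunnels start and end on the leaves; the spine forwards plain
        # IP/UDP packets and needs no tunnel, VNI, or tenant awareness at all.
        state["transport_requirements"] = "plain IP forwarding"
    elif transport == "mpls":
        # With MPLS the spine sits inside every label-switched path: it must
        # run a label distribution mechanism (LDP or SR) and hold a label
        # forwarding entry for every edge loopback FEC.
        state["transport_requirements"] = "MPLS data plane + label distribution"
        state["label_forwarding_entries"] = num_leaves
    return state

print(spine_state(32, "vxlan"))
print(spine_state(32, "mpls"))
```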


VMware NSX: The Good, the Bad and the Ugly

After four live sessions, we finished the VMware NSX Technical Deep Dive webinar yesterday. I still have to edit the materials, but the whole thing is already over six hours long, and there are two more guest speaker sessions to come.

Anyway, in the previous sessions we covered all the good parts of NSX and a few of the bad ones. All that was left for yesterday were the ugly parts.


Automation Win: Configure Cisco ACI with an Ansible Playbook

This blog post was initially sent to subscribers of my mailing list. Subscribe here.

While working on the Create Network Services hands-on exercise in the Building Network Automation Solutions online course, Dirk Feldhaus followed up on his previous work with Cisco ACI and created an Ansible playbook that creates and configures a new tenant and provisions a vSRX firewall for that tenant.
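
Dirk's playbook uses the Ansible ACI modules; as a rough illustration of what such a playbook does under the hood, here's a minimal Python sketch that calls the APIC REST API directly to create a tenant. The APIC address, credentials, and tenant name are placeholder assumptions, not values from his solution.

```python
import requests

APIC = "https://apic.example.com"          # placeholder APIC address
session = requests.Session()

# Authenticate; APIC returns a session cookie that the Session object keeps.
session.post(f"{APIC}/api/aaaLogin.json", verify=False, json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}
}).raise_for_status()

# Create (or update) a tenant as a child of the policy universe (uni).
session.post(f"{APIC}/api/mo/uni.json", verify=False, json={
    "fvTenant": {"attributes": {"name": "Customer-1", "descr": "created via REST"}}
}).raise_for_status()
```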


Leaf-and-Spine Fabric Myths (Part 3)

Evil CCIE concluded his long list of leaf-and-spine fabric myths (more in part 1 and part 2) with a layer-2 fabric myth:

Layer-2 fabrics can't be extended beyond two spine switches. I had a long argument with $vendor guys about this. They don't even count SPB as a layer-2 fabric, and so forth.

The root cause of this myth is a lack of understanding of what layer-2, layer-3, bridging, and routing mean. You might want to revisit a few of my very old blog posts before moving on: part 1, part 2, what is switching, layer-3 switches and routers.


Leaf-and-Spine Fabric Myths (Part 2)

The next set of Leaf-and-Spine Fabric Myths listed by Evil CCIE focused on BGP:

BGP is the best choice for leaf-and-spine fabrics.

I wrote about this particular one here. If you're not a BGP guru, don't overcomplicate your network: OSPF, IS-IS, and EIGRP are good enough for most environments. Also, don't ever turn BGP into RIP with AS-path length serving as hop count.
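
Here's what that last warning means in practice: a toy Python snippet (made-up AS numbers, not a BGP implementation) showing how best-path selection degenerates into shortest-AS-path, which is nothing more than hop counting, when every switch gets its own AS and no other policy is applied.

```python
# Toy illustration: with per-switch AS numbers and no routing policy, BGP
# best-path selection often comes down to the shortest AS path, which is
# effectively RIP-style hop counting.
paths_to_prefix = [
    {"next_hop": "spine-1", "as_path": [65001, 65101]},          # 2 AS hops
    {"next_hop": "spine-2", "as_path": [65002, 65100, 65101]},   # 3 AS hops
]

best = min(paths_to_prefix, key=lambda p: len(p["as_path"]))
print(f"best path via {best['next_hop']} (AS-path length {len(best['as_path'])})")
```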


VXLAN and EVPN on Hypervisor Hosts

One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:

I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is building a CloudStack platform and insists on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.

Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.
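
For readers who haven't looked at the encapsulation itself, here's a minimal sketch of what a hypervisor-based VTEP does on the send path: wrap the tenant's Ethernet frame in the RFC 7348 VXLAN header and ship it over UDP port 4789 to the remote VTEP. The VNI and IP address are made-up example values; production implementations obviously do this in the kernel or the virtual switch data path, not in Python.

```python
import socket
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # RFC 7348 header: flags byte 0x08 (VNI present), 24 reserved bits,
    # then the 24-bit VNI followed by 8 more reserved bits.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

inner = bytes(64)                       # placeholder for a tenant Ethernet frame
packet = vxlan_encap(inner, vni=10123)  # example VNI

# Send the encapsulated frame to the remote VTEP over UDP port 4789.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("192.0.2.11", 4789))
```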


Leaf-and-Spine Fabric Myths (Part 1)

Apart from the “they have no clue what they’re talking about” observation, Evil CCIE left a long list of leaf-and-spine fabric myths he encountered in the wild in a comment on one of my blog posts. He started with:

Clos fabric (aka Leaf And Spine fabric) is a non-blocking fabric

That was obviously true in the days when Mr. Clos designed the voice switching solution that still bears his name. In the original Clos network every voice call would get a dedicated path across the fabric, and the number of voice calls supported by the fabric equaled the number of alternate end-to-end paths.
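
Modern leaf-and-spine fabrics are at best statistically non-blocking, and only when the leaf uplink capacity matches its downlink capacity. A quick back-of-the-envelope check with illustrative numbers (my assumptions, not figures from the post):

```python
# Example leaf switch: 48 x 10GE server-facing ports, 4 x 100GE uplinks
# (one per spine switch).
server_ports, server_speed = 48, 10
uplinks, uplink_speed = 4, 100

downlink_bw = server_ports * server_speed            # 480 Gbps toward servers
uplink_bw = uplinks * uplink_speed                   # 400 Gbps toward spines
print("leaf-to-leaf ECMP paths:", uplinks)           # one path per spine
print("oversubscription: %.2f:1" % (downlink_bw / uplink_bw))   # 1.20:1
```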


Network Troubleshooting Guidelines

It all started with an interesting discussion about weird MLAG bugs during our last Building Next-Generation Data Center online course. The discussion almost devolved into "when in doubt, reload" yammering when Mark Horsfield stepped in, saying "while that may be true, make sure to check and collect these things before reloading".

I loved what he wrote so much that I asked him to turn it into a blog post… and he made it even better by expanding it into generic network troubleshooting guidelines. Enjoy!


Implications of Valley-Free Routing in Data Center Fabrics

As I explained in a previous blog post, most leaf-and-spine best practices (as in: what to do if you have no clue) use BGP as the IGP routing protocol (regardless of whether it's needed) with the same AS number shared across all spine switches to implement valley-free routing.

This design has an interesting consequence: when a link between a leaf and a spine switch fails, those two switches can no longer communicate with each other.
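
The culprit is standard eBGP loop prevention: because all spines share one AS, a spine drops any route whose AS path already contains that AS, so it cannot reach the affected leaf through another spine. A minimal sketch of the inbound check (made-up AS numbers, not a BGP implementation):

```python
SPINE_AS = 65000                    # AS number shared by all spine switches

def accept_route(as_path: list[int], local_as: int = SPINE_AS) -> bool:
    """eBGP loop prevention: reject any update that already contains our AS."""
    return local_as not in as_path

# Spine-1 lost its direct link to leaf-1 (AS 65101) and hears the prefix again
# from leaf-2 (AS 65102), which learned it from spine-2 (also AS 65000).
print(accept_route([65102, 65000, 65101]))    # False: spine-1 drops the route
```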


VXLAN Broadcast Domain Size Limitations

One of the attendees of my Building Next-Generation Data Center online course tried to figure out whether you can build larger broadcast domains with VXLAN than you could with VLANs. Here’s what he sent me:

I'm trying to understand the differences and similarities between VLAN and VXLAN technologies in terms of (*cast) domain limitations.

There’s no difference between the two on the client-facing side. VXLAN is just an encapsulation technology and doesn’t change how bridging works at all (read also part 2 of that story).
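
For completeness, the numbers behind the question: VXLAN widens the segment ID space from 12 to 24 bits, but it does nothing for the size of an individual broadcast domain; flooding within a segment works exactly as it does with VLANs. A quick sanity check:

```python
vlan_id_bits, vxlan_vni_bits = 12, 24
print("usable VLANs:", 2**vlan_id_bits - 2)    # 4094 (IDs 0 and 4095 reserved)
print("VXLAN segments:", 2**vxlan_vni_bits)    # 16,777,216 VNIs
```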


Routing in Data Center: What Problem Are You Trying to Solve?

Here’s a question I got from an attendee of my Building Next-Generation Data Center online course:

As far as I understood […] it is obsolete nowadays to build a new DC fabric with routing on the host using BGP; the proper way to go is to use an IGP + SDN overlay. Is my understanding correct?

Ignoring for the moment the fact that nothing is ever obsolete in IT, the right answer is it depends… this time on the answers to two seemingly simple questions: "what services are we offering?" and "what connectivity problem are we trying to solve?"


Repost: Tony Przygienda on Valley-Free (or Non-ZigZag) Routing

Most blog posts generate the usual noise from the anonymous peanut gallery (if only they had at least a sliver of Statler and Waldorf in them), but every now and then there's a comment that's pure gold. The one Tony Przygienda (of RIFT fame) made on the Valley-Free Routing post is so good and relevant that I decided to republish it as a separate blog post. Enjoy!
