Category: data center
Feedback: Data Center Interconnects Webinar
I got great feedback about the first part of the Data Center Interconnects webinar from one of the ipSpace.net subscribers:
I had no specific expectation when I started watching the material and I must have watched it 6 times by now.
Your webinar covered just the right level of detail to educate myself or refresh my knowledge of the technologies and the relevant options on today’s market.
The information provided is powerful and avoids the useless discussions with vendors and PowerPoint pitches. Once you ask the right questions, it’s easy to get an idea of vendor readiness.
In the first live session we covered the easy cases: design considerations and layer-3 interconnects with path separation (multiple routing domains). The real fun will start in the second live session on March 19th, when we’ll dive into stretched VLANs and long-distance vMotion ideas.
You can attend the live session with any paid ipSpace.net subscription – details here.
Anyone Using Intel Omni-Path?
One of my subscribers sent me this question after watching the latest batch of Data Center Fabrics videos:
You haven’t mentioned Intel's Omni-Path at all. Should I be surprised?
While Omni-Path looks like a cool technology (at least at the whitepaper level), nobody ever mentioned it (or Intel) in any data center switching discussion I was involved in.
Private VLANs With VXLAN
I got this remark from a reader after he read the VXLAN and Q-in-Q blog post:
Another EVPN/VXLAN feature gap is private VLAN support: it’s missing on both Nexus and Juniper switches.
I have one word on using private VLANs in 2019: Don’t. They are messy and complicated to maintain (not to mention how exciting it gets to combine virtual and physical switches).
Worth Reading: MPLS and ExaBGP
Jon Langemak is on a writing spree: after completing his MPLS-on-Junos series he started a deep dive into ExaBGP. Well worth reading if you’re enjoying detailed technical blog posts.
Cross-Data-Center L4-7 Services With Cisco ACI
Craig Weinhold sent me his thoughts on using Cisco ACI to implement cross-data-center L4-7 services. While we both believe this is not the way to do things (because you should start with proper application architecture), you might find his insights useful if you have to deal with legacy environments that believe in Santa Claus and solving application problems with networking infrastructure.
An “easy button” for multi-DC is like the quest for the holy grail. I explain to my clients that the answer is right in front of them – local IP addressing, L3 routing, and DNS. But they refuse to accept that, draw their swords, and engage in a fruitless war against common sense: asymmetry, stateful inspection, ingress routing, split-brain, quorums, host mobility, cache coherency, non-RFC-compliant ARP, and so on.
Loop Avoidance in VXLAN Networks
Antonio Boj sent me this interesting challenge:
Is there any way to avoid, prevent or at least mitigate bridging loops when using VXLAN with EVPN? Spanning-tree is not supported when using VXLAN encapsulation so I was hoping to use EVPN duplicate MAC detection.
MAC move dampening (or anything similar) doesn’t help if you have a forwarding loop. You might be able to use it to identify there’s a loop, but that’s it… and while you’re doing that your network is melting down.
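Just to illustrate why detection is not the same thing as prevention, here’s a minimal Python sketch of MAC-move dampening logic (thresholds and the alert action are made-up placeholders, not any vendor’s implementation): it can tell you that a MAC address keeps flipping between VTEPs, but the flooded frames keep looping while it does so.

```python
# Conceptual sketch of MAC-move detection: count how often a MAC address
# flips between VTEPs within a time window. It can raise an alarm, but it
# cannot stop the flooded frames already looping through the fabric.
import time
from collections import defaultdict, deque

MOVE_THRESHOLD = 5      # this many moves ...
WINDOW_SECONDS = 180    # ... within this window looks like a loop

last_vtep = {}                      # MAC -> VTEP it was last learned behind
recent_moves = defaultdict(deque)   # MAC -> timestamps of recent moves

def learn_mac(mac: str, vtep: str) -> None:
    now = time.time()
    if last_vtep.get(mac) not in (None, vtep):       # the MAC just moved
        moves = recent_moves[mac]
        moves.append(now)
        while moves and now - moves[0] > WINDOW_SECONDS:
            moves.popleft()                          # forget stale moves
        if len(moves) >= MOVE_THRESHOLD:
            print(f"possible bridging loop: {mac} moved {len(moves)} times "
                  f"in the last {WINDOW_SECONDS} seconds")
    last_vtep[mac] = vtep            # learning (and flooding) goes on regardless
```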
Operating Cisco ACI the Right Way
This is a guest blog post by Andrea Dainese, senior network and security architect, and author of UNetLab (now EVE-NG) and Route Reflector Labs. These days you’ll find him busy automating Cisco ACI deployments.
In this post we’ll focus on a simple question that arises in numerous chats I have with colleagues and customers: how should a network engineer operate Cisco ACI? A lot of them don’t use any sort of network automation and manage their Cisco ACI deployments using the Web Interface. Is that good or evil? As you’ll see we have a definite answer and it’s not “it depends”.
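To make the alternative to GUI-driven operations a bit more tangible, here’s a minimal Python sketch of talking to the APIC REST API instead of clicking through the Web Interface. The controller address and credentials are placeholders, and real deployments obviously need proper certificate and secrets handling.

```python
# Minimal sketch of API-driven ACI operation: log in to the APIC REST API
# and list the configured tenants. Address and credentials are placeholders.
import requests

APIC = "https://apic.example.net"    # hypothetical APIC address

session = requests.Session()
session.verify = False               # lab only; use proper certificates in production
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

reply = session.get(f"{APIC}/api/class/fvTenant.json").json()
for obj in reply["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```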
To Centralize or not to Centralize, That’s the Question
One of the attendees of the Building Next-Generation Data Center online course solved the build a small data center fabric challenge with Virtual Chassis Fabric (VCF). I pointed out that I would prefer not to use VCF because it uses a centralized control plane and is thus a single failure domain.
Here are his arguments for using VCF:
Can I Replace a Commercial Load Balancer with HAProxy?
A networking engineer attending the Building Next-Generation Data Centers online course sent me this question:
My client will migrate their data center, so they’re not interested in upgrading existing $vendor load balancers. Would HAProxy be a good alternative?
As you might be facing a similar challenge, here’s what I told him:
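To give you an idea of how far a simple open-source setup gets you, here’s a minimal HAProxy configuration sketch (backend name and server addresses are made-up placeholders) doing round-robin HTTP load balancing with health checks:

```
# Minimal HAProxy sketch: round-robin HTTP load balancing with health checks.
# Backend name and server addresses are made-up placeholders.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 10.0.1.11:8080 check
    server app2 10.0.1.12:8080 check
```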
Video: What Problem Are We Solving with SDDC?
Remember the Software-Defined Data Centers hype? While I covered SDDC concepts and technologies for years in my webinars and workshops, I never created an introductory webinar on the topic.
That omission was fixed in late August – the SDDC 101 webinar is available as part of the free ipSpace.net subscription, and as always I started with the seemingly simple question: what problem are we trying to solve?
Odd Number of Spines in Leaf-and-Spine Fabrics
In the market overview section of the introductory part of the data center fabric architectures webinar I recommended using a larger number of fixed-configuration spine switches instead of two chassis-based spines when building a medium-sized leaf-and-spine fabric, and explained the reasoning behind that recommendation (increased availability and reduced impact of a spine failure).
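The reduced-impact part is simple arithmetic: assuming equal-capacity spines and leaf uplinks spread across all of them, this back-of-the-envelope sketch shows how much fabric capacity a single spine failure takes away.

```python
# With N equal spines, each leaf spreads its uplinks across all of them,
# so losing one spine removes 1/N of the fabric capacity.
for spines in (2, 3, 4, 5, 6):
    lost = 100 / spines
    print(f"{spines} spines: one failure costs {lost:.0f}% of capacity, "
          f"{100 - lost:.0f}% remains")
```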
One of the attendees wondered about the “right” number of spine switches – does it have to be four, or could you have three or five spines? In his words:
Interview: Active-Active Data Centers With VXLAN and EVPN
Christoph Jaggi asked me a few questions about using VXLAN with EVPN to build data center fabrics and interconnects (including active/active data centers). The German version was published on Inside-IT; here’s the English version.
He started with an obvious one:
What is an active-active data center, and why would I want to use it?
Numerous organizations have multiple data centers for load sharing or disaster recovery purposes. They could use one of their data centers and have the other(s) as warm or cold standby (active/backup setup) or use all data centers at the same time (active/active).
Using MPLS+EVPN in Data Center Fabrics
Here’s a question I got from someone attending the Building Next-Generation Data Center online course:
Cisco NCS5000 is positioned as a building block for a data center MPLS fabric – a leaf-and-spine fabric with MPLS and EVPN control plane. This raised a question regarding MPLS vs VXLAN: why would one choose to build an MPLS-based fabric instead of a VXLAN-based one assuming hardware costs are similar?
There’s a fundamental difference between MPLS- and VXLAN-based transport: the amount of coupling between edge and core devices.
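To see where that coupling (or the lack of it) comes from, consider what a core device has to understand: a VXLAN packet is just another IP/UDP datagram, so the spines need nothing beyond IP reachability between VTEP loopbacks, whereas in an MPLS fabric every core device has to carry labels toward the edge devices in its forwarding table. Here’s a small Python sketch building the 8-byte VXLAN header from RFC 7348 to make the first half of that argument concrete:

```python
# The VXLAN header (RFC 7348) is 8 bytes: flags with the I bit set, reserved
# bits, and a 24-bit VNI. Everything tenant-specific sits in this shim and in
# the VTEPs; the core devices only ever see the outer IP/UDP headers.
import struct

VXLAN_UDP_PORT = 4789    # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    flags = 0x08 << 24               # I flag set, remaining bits reserved
    return struct.pack("!II", flags, vni << 8)

print(vxlan_header(10001).hex())     # -> 0800000000271100
```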
Observability Is the New Black
In early October I had a chat with Dinesh Dutt discussing the outline of the webinar he’ll do in November. A few days later Fastly published a blog post on almost exactly the same topic. Coincidence? Probably… but it does seem like observability is the next emerging buzzword, and Dinesh will try to put it into perspective answering these questions:
VMware NSX: The Good, the Bad and the Ugly
After four live sessions we finished the VMware NSX Technical Deep Dive webinar yesterday. Still have to edit the materials, but right now the whole thing is already over 6 hours long, and there are two more guest speaker sessions to come.
Anyway, in the previous sessions we covered all the good parts of NSX and a few of the bad ones; what was left for yesterday were the ugly parts.