Category: data center

Response: The Usability of VXLAN

Wes made an interesting comment on the Migrating a Data Center Fabric to VXLAN blog post:

The benefit of VXLAN is mostly scalability, so if your enterprise network is not scaling… just don’t. The migration path from VLANs is to just keep using VLANs. The (vendor-driven) networking industry has a huge blind spot about this.

Paraphrasing Dinesh Dutt’s famous Autocon1 remark: I couldn’t disagree with you more.


Migrating a Data Center Fabric to VXLAN

Darko Petrovic made an excellent remark on one of my LinkedIn posts:

The majority of the networks running now in the Enterprise are on traditional VLANs, and the migration paths are limited. Really limited. How will a business transition from traditional to whatever is next?

The only sane choice I’ve found so far in the data center environment (and I know it has been embraced by many organizations facing that conundrum) is to build a parallel fabric (preferably when the organization is doing a server refresh) and connect the new fabric to the old one with a layer-3 link (in an ideal world) or an MLAG link bundle.


The Mythical Use Cases: Traffic Engineering for Data Center Backups

Vendor product managers love discussing mythical use cases to warrant complex functionality in their gear. Long-distance VM mobility was one of those (using it for disaster avoidance was Mission Impossible under any real-world assumptions), and high-volume network-based backups seem to be another. Here’s what someone had to say about that particular unicorn in a LinkedIn comment when discussing whether we need traffic engineering in a data center fabric.

When you’re dealing with a large cluster on a fabric, you will see things like inband backup. The most common one I’ve seen is VEEAM. Those inband backups can flood a single link, and no amount of link scheduling really solves that; depending on the source, they can saturate 100G. There are a couple of solutions; IPv6 or eBGP SID has been used to avoid these links or schedule avoidance for other traffic.
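
To get a feel for the numbers behind that claim, here’s a rough back-of-the-envelope calculation in Python; the backup size, link speed, and goodput ratio are illustrative assumptions, not figures from the comment.

```python
# Back-of-the-envelope: how long a full backup keeps a server uplink busy.
# All numbers below are illustrative assumptions.

link_speed_bps = 100e9          # 100GE server uplink
goodput_ratio = 0.9             # rough allowance for protocol overhead
backup_size_bytes = 50e12       # assume a 50 TB full backup

transfer_time_s = backup_size_bytes * 8 / (link_speed_bps * goodput_ratio)
print(f"Backup transfer time: {transfer_time_s / 60:.1f} minutes")
# ~74 minutes during which a single backup job could keep a 100G link busy
```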

It is true that (A) in-band backups can be bandwidth intensive and that (B) well-written applications can saturate 100G server links. However:


Video: What Is Software-Defined Data Center

A few years ago, I was asked to deliver a What Is SDDC presentation that later became a webinar. I forgot about that webinar until I received feedback from one of the viewers a week ago:

If you like to learn from the teachers with the “straight to the point” approach and complement the theory with many “real-life” scenarios, then ipSpace.net is the right place for you.

I hadn’t realized people still find that webinar useful, so let’s make it viewable without registration, starting with What Problem Are We Trying to Solve and What Is SDDC.

You need at least a free ipSpace.net subscription to watch the videos in this webinar.

Path Failure Detection on Multi-Homed Servers

TL&DR: Installing an Ethernet NIC with two uplinks in a server is easy. Connecting those uplinks to two edge switches is common sense. Detecting physical link failure is trivial in the Gigabit Ethernet world. Deciding between two independent uplinks or a link aggregation group is interesting. Detecting path failure and disabling the useless uplink that causes traffic blackholing is a living hell (more details in this Design Clinic question).

Want to know more? Let’s dive into the gory details.
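
To make the problem more tangible, here’s a minimal Python sketch of active path probing on a dual-homed Linux server: it pings a next-hop address out of each uplink and shuts down any uplink whose probes keep failing. The interface names, probe target, and the `ping -I` / `ip link` commands are illustrative assumptions, not a recipe from the article.

```python
import subprocess
import time

UPLINKS = ["eth0", "eth1"]        # hypothetical server uplinks
PROBE_TARGET = "192.0.2.1"        # hypothetical next-hop / gateway address
FAIL_THRESHOLD = 3                # consecutive probe failures before giving up

failures = {nic: 0 for nic in UPLINKS}

def probe(nic: str) -> bool:
    """Send a single ICMP probe out of a specific uplink (Linux ping -I)."""
    result = subprocess.run(
        ["ping", "-I", nic, "-c", "1", "-W", "1", PROBE_TARGET],
        stdout=subprocess.DEVNULL,
    )
    return result.returncode == 0

def disable(nic: str) -> None:
    """Take the uplink down so the host stops blackholing traffic into it."""
    subprocess.run(["ip", "link", "set", "dev", nic, "down"])

while True:
    for nic in UPLINKS:
        if probe(nic):
            failures[nic] = 0
        else:
            failures[nic] += 1
            if failures[nic] >= FAIL_THRESHOLD:
                disable(nic)
    time.sleep(1)
```

Running BFD or a routing protocol on the server would give you roughly the same behavior without the scripting, which is part of the design discussion.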


Video: Packet Buffers in Data Center ASICs

A few years ago, we were fortunate enough to have Pete Lumbis talking about ASICs for Networking Engineers as part of the Data Center Fabric Architectures webinar.

One of the topics he couldn’t possibly skip was the question of how many packet buffers one needs in a data center switch.
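
As a rough frame of reference, one classic rule of thumb sizes buffers at the bandwidth-delay product, optionally divided by the square root of the number of concurrent long-lived flows. The Python sketch below just does that arithmetic for an assumed 100GE port and intra-fabric RTT; the numbers are illustrative, not taken from the webinar.

```python
import math

# Illustrative assumptions, not vendor data
port_speed_bps = 100e9    # 100GE switch port
rtt_s = 50e-6             # ~50 microseconds intra-fabric RTT
flows = 1000              # concurrent long-lived TCP flows

bdp_bytes = port_speed_bps * rtt_s / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.2f} MB per port")               # 0.62 MB
print(f"With the sqrt(N) rule: {bdp_bytes / math.sqrt(flows) / 1e3:.1f} KB per port")  # ~19.8 KB
```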

If you want even more details, watch the Networks, Buffers, and Drops webinar.

Is It Time to Replace TCP in Data Centers?

One of my readers asked for my opinion about the provocative “It’s Time to Replace TCP in the Datacenter” article by Prof. John Ousterhout. I started reading it, found too many things that didn’t make sense, and decided to ignore it as another attempt by a proverbial physicist to solve hard problems in someone else’s field.

However, pointers to that article kept popping up, and I eventually realized it was a position paper in a long-term process that included conference talks, interviews and keynote speeches, so I decided to take another look at the technical details.


EVPN/VXLAN or Bridged Data Center Fabric?

An attendee in the Building Next-Generation Data Center online course sent me an interesting dilemma:

Some customers don’t like EVPN because of its complexity (it requires knowledge of BGP, symmetric/asymmetric IRB, ARP suppression, VRFs, RT/RD, etc.). They agree that EVPN gives more stability and broadcast traffic optimization, but it still won’t save the DC from broadcast storms, because the protection methods are the same for both solutions (minimize L2 segments, storm control).
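
As an aside, some of the RT/RD bookkeeping the comment mentions is usually automated: many implementations auto-derive the route target from the BGP AS number and the VNI, and the route distinguisher from the router ID and a local index. A hypothetical Python sketch of that common convention (all values are made up):

```python
def derive_rt(asn: int, vni: int) -> str:
    """Common auto-derivation convention: route target = ASN:VNI."""
    return f"{asn}:{vni}"

def derive_rd(router_id: str, local_index: int) -> str:
    """Common convention: route distinguisher = router-ID:local index."""
    return f"{router_id}:{local_index}"

print(derive_rt(65001, 10100))       # 65001:10100
print(derive_rd("10.0.0.1", 100))    # 10.0.0.1:100
```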

We’ll deal with the unnecessary EVPN-induced complexity some other time; today, let’s start with a few intro-level details.


VXLAN-to-VXLAN Bridging in DCI Environments

Almost exactly a decade ago I wrote that VXLAN isn’t a data center interconnect technology. That’s still true, but you can make it a bit better with EVPN – at the very minimum you’ll get an ARP proxy and anycast gateway. Even this combo does not address the other requirements I listed a decade ago, but maybe I’m too demanding and good enough works well enough.

However, there is one other bit that was missing from most VXLAN implementations: LAN-to-WAN VXLAN-to-VXLAN bridging. Sounds weird? Supposedly a picture is worth a thousand words, so here we go.
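
To see what’s actually being stitched together, here’s a small Scapy sketch that builds a VXLAN-encapsulated frame and then “re-bridges” the inner frame into a different VNI, which is essentially what LAN-to-WAN VXLAN-to-VXLAN bridging has to do. All VNIs and addresses are made-up illustration values.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Original VXLAN-encapsulated frame inside the data center fabric
inner = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / IP(src="10.1.1.1", dst="10.1.1.2")
fabric_pkt = (
    Ether()
    / IP(src="192.0.2.11", dst="192.0.2.12")    # fabric VTEP addresses (made up)
    / UDP(dport=4789)                           # VXLAN UDP port
    / VXLAN(vni=10100)                          # LAN-side VNI
    / inner
)

# LAN-to-WAN bridging: keep the customer frame, re-encapsulate into the WAN VNI
wan_pkt = (
    Ether()
    / IP(src="203.0.113.1", dst="203.0.113.2")  # DCI-facing VTEP addresses (made up)
    / UDP(dport=4789)
    / VXLAN(vni=20100)                          # WAN-side VNI
    / fabric_pkt[VXLAN].payload.copy()          # the original inner Ethernet frame
)

print(fabric_pkt[VXLAN].vni, "->", wan_pkt[VXLAN].vni)
```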


Select the Best Switching ASIC For the Job

Last week I described some of the data center switching ASIC design tradeoffs and the ASIC families Broadcom created to fit somewhere in that multi-dimensional space.

Next step: how could you design your data center fabric to make the most out of them? To keep things simple, we’ll build a typical leaf-and-spine fabric with a WAN edge layer (sometimes called border leaf switches).
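
One of the first numbers to pin down in such a design is the leaf oversubscription ratio. The quick Python calculation below uses a hypothetical 48 x 25GE + 8 x 100GE leaf switch; the port counts are illustrative assumptions, not figures from the ASIC discussion.

```python
# Hypothetical leaf switch: 48 x 25GE server ports, 8 x 100GE uplinks
server_ports, server_speed_gbps = 48, 25
uplink_ports, uplink_speed_gbps = 8, 100

downlink_capacity = server_ports * server_speed_gbps   # 1200 Gbps toward servers
uplink_capacity = uplink_ports * uplink_speed_gbps     # 800 Gbps toward spines

print(f"Oversubscription ratio: {downlink_capacity / uplink_capacity:.2f}:1")  # 1.50:1
```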


Data Center Switching ASICs Tradeoffs

A brief mention of Broadcom ASIC families in the Networking Hardware/Software Disaggregation in 2022 blog post triggered an interesting discussion of ASIC features and where one should use different ASIC families.

Like so many things in life, ASIC design is all about tradeoffs. Usually you’re faced with a decision: implement X (whatever X happens to be), have a high-performance product, or have a reasonably priced product. It’s very hard to get two out of three, and getting all three is beyond Mission Impossible.
