Category: design

Real-Life Data Center Meltdown

A good friend of mine, who prefers to stay A. Nonymous for obvious reasons, sent me his “how I lost my data center to a broadcast storm” story. Enjoy!


A small-ish data center with several hundred racks. Each row of racks is supported by an end-of-row stack, and each stack has two L2 EtherChannels, one to each of the two core switches. The inter-switch link details don’t matter other than to highlight “sprawling L2 domains.”

VLAN pruning was used to limit L2 scope, but a few VLANs went everywhere, including the management VLAN.
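
A tiny Python sketch (with made-up switch names and VLAN numbers, not taken from the actual network) shows why that matters: invert the trunk allowed-VLAN lists and you immediately see which VLANs turn into a fabric-wide broadcast domain.

```python
# Minimal sketch: how far does each VLAN's broadcast domain reach?
# Switch names and VLAN numbers below are made up for illustration.

# Allowed-VLAN lists per switch (the result of VLAN pruning)
allowed_vlans = {
    "row1-stack": {10, 11, 100},   # 100 = management VLAN
    "row2-stack": {20, 21, 100},
    "row3-stack": {30, 31, 100},
    "core-1":     {10, 11, 20, 21, 30, 31, 100},
    "core-2":     {10, 11, 20, 21, 30, 31, 100},
}

# Invert the map: VLAN -> set of switches carrying it
vlan_scope = {}
for switch, vlans in allowed_vlans.items():
    for vlan in vlans:
        vlan_scope.setdefault(vlan, set()).add(switch)

for vlan, switches in sorted(vlan_scope.items()):
    print(f"VLAN {vlan}: {len(switches)} switches -> {sorted(switches)}")

# VLAN 100 appears on every switch: a broadcast storm in that VLAN
# can take down management access to the whole data center.
```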

read more see 3 comments

Decide How Badly You Want to Fail

Every time I’m running a data center-related workshop I inevitably get pulled into a stretched VLANs and stretched clusters discussion. While I always tell the attendees what the right way of doing this is, and explain the challenges of stretched VLANs from all perspectives (application, database, storage, routing, and broadcast domains), the sad truth is that sometimes there’s nothing you can do.

You’ll find a generic version of that explanation in Building Active-Active and Disaster Recovery Data Centers webinar.

In those sad cases, I can give the workshop attendees only one piece of advice: face the reality and figure out how badly you might fail. It’s useless pretending you won’t get into a split-brain scenario; redundant equipment just makes it less likely, unless you’ve over-complicated it, in which case adding redundancy reduces availability. It’s equally useless pretending you won’t be facing a forwarding loop.
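
A back-of-the-envelope calculation (with made-up availability numbers) shows both edges of that sword: two boxes in parallel look great on paper, but once the extra clustering or arbitration logic they need also has to work, the combined availability can drop below that of a single box.

```python
# Back-of-the-envelope availability math with illustrative numbers only.

box = 0.999                       # single device availability (assumed)
parallel = 1 - (1 - box) ** 2     # two boxes, either one is enough
print(f"two boxes in parallel: {parallel:.6f}")   # ~0.999999

# Add an over-complicated clustering/arbitration mechanism that must
# also work, and that occasionally misbehaves (split brain, sync bugs...)
cluster_logic = 0.995             # assumed availability of the extra complexity
combined = parallel * cluster_logic
print(f"redundant pair + fragile clustering: {combined:.6f}")  # ~0.995

# The "redundant" design ends up less available than a single box (0.999).
```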

read more see 2 comments

Shifting Responsibility in Network Design and Operations

When I started working with Cisco routers in the late 1980s, all you could get were devices with a dozen or so ports and CPU-based forwarding (marketers would call it software-defined these days). Not surprisingly, many presentations at Cisco conferences (before they were called Networkers or Cisco Live) focused on good network design and the split of functionality into core, aggregation (or distribution), and access layers.

Following those rules got you stable and predictable networks. Not everyone would listen; some customers tried to be cheap and implemented too many things on the same box… with predictable results (today they would be quick to blame the vendor’s poor software quality).

read more add comment

Feedback: Data Center Interconnects Webinar

I got great feedback about the first part of Data Center Interconnects webinar from one of ipSpace.net subscribers:

I had no specific expectation when I started watching the material and I must have watched it 6 times by now.

Your webinar covered just the right level of detail to educate myself or refresh my knowledge on the technologies and relevant options for today’s market choices.

The information provided is powerful and avoids useless discussions with vendors and PowerPoint pitches. Once you ask the right question, it’s easy to get an idea of the vendor’s readiness.

In the first live session we covered the easy cases: design considerations, and layer-3 interconnect with path separation (multiple routing domains). The real fun will start in the second live session on March 19th when we’ll dive into stretched VLANs and long-distance vMotion ideas.

You can attend the live session with any paid ipSpace.net subscription (details here).

add comment

More Thoughts on Vendor Lock-In and Subscriptions

Albert Siersema sent me his thoughts on lock-in and the recent tendency to sell network device (or software) subscriptions instead of boxes. A few of my comments are inline.

Another trend in the industry is to convert support contracts into subscriptions. That is, the entrenched players seem to be focusing more on that business model (too). In the end, I feel the customer won't reap that many benefits, and you probably will end up paying more. But that's my old grumpy cynicism talking :)

While I agree with that, buying a subscription instead of owning a box (and depreciating it) also makes it easier to persuade the bean counters to replace the gear, because there’s little residual value in existing boxes (and it’s easy to demonstrate the total cost of ownership). Like every decent sword, this one has two edges ;)
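
Here’s that TCO argument reduced to toy arithmetic (all numbers are invented for illustration): once the hardware is an annual subscription rather than a depreciating asset, there’s no residual book value standing in the way of a replacement decision.

```python
# Toy TCO comparison over 5 years -- all numbers are hypothetical.

years = 5
capex_box = 50_000              # purchase price of an owned switch
support_per_year = 5_000        # classic support contract
subscription_per_year = 14_000  # hardware + software + support bundle

capex_tco = capex_box + support_per_year * years
subs_tco = subscription_per_year * years

print(f"owned box TCO over {years} years:    {capex_tco}")
print(f"subscription TCO over {years} years: {subs_tco}")

# Halfway through its life the owned box still carries residual book
# value the bean counters hate to write off; the subscription has none,
# which makes "let's replace the gear" a much easier conversation.
```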

read more see 1 comments

Q-in-Q Support in Multi-Site EVPN

One of my subscribers sent me a question along these lines (heavily abridged):

My customer runs a colocation business and has to provide L2 connectivity between racks, sometimes even across multiple data centers. They were using Q-in-Q to deliver that in a traditional fabric and would like to replace that with multi-site EVPN fabric with ~100 ToR switches in each data center. However, Cisco doesn’t support Q-in-Q with multi-site EVPN. Any ideas?

As Lukas Krattiger explained in his part of the Multi-Site Leaf-and-Spine Fabrics section of the Leaf-and-Spine Fabric Architectures webinar, multi-site EVPN (VXLAN-to-VXLAN bridging) is hard. Don’t expect miracles like Q-in-Q over a VNI any time soon ;)
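
As a quick illustration of what the customer needs to transport (not of any vendor implementation), here’s a scapy sketch of a double-tagged Q-in-Q frame. VLAN numbers are made up, and a strict 802.1ad service tag would use EtherType 0x88a8 rather than scapy’s default 0x8100.

```python
from scapy.all import Ether, Dot1Q, IP

# Double-tagged (Q-in-Q) frame: outer service tag + inner customer tag.
frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / Dot1Q(vlan=100)   # S-tag: colocation customer #100 (illustrative)
    / Dot1Q(vlan=42)    # C-tag: whatever VLAN the tenant uses internally
    / IP(dst="192.0.2.1")
)
frame.show()

# A VXLAN VNI traditionally maps to a single VLAN, so carrying the whole
# tag stack transparently across a multi-site EVPN fabric is exactly the
# part the fabric vendor has to support.
```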

read more see 4 comments

BGP as High Availability Protocol

Every now and then someone tells me I should write more about the basic networking concepts like I did years ago when I started blogging. I’m probably too old (and too grumpy) for that, but fortunately I’m no longer on my own.

Over the years, ipSpace.net slowly grew into a small community of networking experts, and we got to the point where you’ll see regular blog posts from other community members, starting with Using BGP as High-Availability protocol written by Nicola Modena, a member of the ExpertExpress team.
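
To give you a flavor of the idea (this is my quick sketch, not Nicola’s design): a BGP daemon such as ExaBGP can run a helper process that announces a service prefix only while the service passes a health check, and withdraws it otherwise. The prefix, port, and timer values below are placeholders.

```python
#!/usr/bin/env python3
# Minimal ExaBGP helper-process sketch: announce a service /32 only while
# the local service answers on its TCP port. Prefix, port, and interval
# are hypothetical -- adjust to your environment.
import socket
import sys
import time

SERVICE_PREFIX = "192.0.2.53/32"   # hypothetical anycast service address
CHECK_PORT = 53                    # hypothetical service port
INTERVAL = 5                       # seconds between health checks

def service_is_healthy():
    try:
        with socket.create_connection(("127.0.0.1", CHECK_PORT), timeout=2):
            return True
    except OSError:
        return False

announced = False
while True:
    healthy = service_is_healthy()
    if healthy and not announced:
        sys.stdout.write(f"announce route {SERVICE_PREFIX} next-hop self\n")
        announced = True
    elif not healthy and announced:
        sys.stdout.write(f"withdraw route {SERVICE_PREFIX} next-hop self\n")
        announced = False
    sys.stdout.flush()   # ExaBGP reads these commands from our stdout
    time.sleep(INTERVAL)
```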

add comment

Odd Number of Spines in Leaf-and-Spine Fabrics

In the market overview section of the introductory part of the data center fabric architectures webinar I recommended using a larger number of fixed-configuration spine switches instead of two chassis-based spines when building a medium-sized leaf-and-spine fabric, and explained the reasoning behind it (increased availability, reduced impact of a spine failure).
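
The availability part of that reasoning boils down to simple arithmetic: with equal-cost load balancing across identical spines, losing one spine costs you 1/N of the fabric capacity. A trivial sketch:

```python
# Fraction of leaf uplink capacity lost when a single spine fails,
# assuming equal-cost load balancing across identical spines.
for spines in (2, 3, 4, 5, 6):
    lost = 1 / spines
    print(f"{spines} spines: one spine failure removes {lost:.0%} of capacity")

# 2 spines -> 50% capacity loss; 4 spines -> 25%; 6 spines -> ~17%
```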

One of the attendees wondered about the “right” number of spine switches – does it have to be four, or could you have three or five spines? In his words:

read more see 7 comments

Leaf-and-Spine Fabric Myths (Part 2)

The next set of Leaf-and-Spine Fabric Myths listed by Evil CCIE focused on BGP:

BGP is the best choice for leaf-and-spine fabrics.

I wrote about this particular one here. If you’re not a BGP guru, don’t overcomplicate your network. OSPF, IS-IS, and EIGRP are good enough for most environments. Also, don’t ever turn BGP into RIP with AS-path length serving as the hop count.

read more see 4 comments

Don't Make a Total Mess When Dealing with Exceptions

A while ago I had the dubious “privilege” of observing how my “beloved” airline Adria Airways deals with exceptions. A third-party incoming flight was 2.5 hours late, and in their infinite wisdom (most probably to avoid financial impact) they decided to delay a half-dozen outgoing flights by 20-30 minutes while waiting for the transfer passengers.

Not surprisingly, when that weird thingy landed and they started boarding the outgoing flights (now all at the same time), the result was a total mess, with buses blocking each other (this same airline loves to avoid jet bridges).

read more see 1 comments

Implications of Valley-Free Routing in Data Center Fabrics

As I explained in a previous blog post, most leaf-and-spine best practices (as in: what to do if you have no clue) use BGP as the IGP routing protocol (regardless of whether it’s needed), with the same AS number shared across all spine switches to implement valley-free routing.

This design has an interesting consequence: when the link between a leaf and a spine switch fails, the two switches can no longer communicate.
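
The culprit is BGP’s AS-path loop prevention: after the leaf-to-spine link fails, the only remaining path between the two runs through another spine and another leaf, so any update reaching the affected spine already carries the shared spine AS number in its AS path and gets dropped. A few lines of Python (with made-up AS numbers) illustrate the check:

```python
# Minimal sketch of BGP AS-path loop prevention, the mechanism that
# enforces valley-free routing when every spine shares one AS number.
SPINE_AS = 65000                 # illustrative; shared by spine-1 and spine-2
LEAF1_AS, LEAF2_AS = 65001, 65002

def accepts(receiver_as, as_path):
    """An eBGP speaker rejects any update whose AS path contains its own AS."""
    return receiver_as not in as_path

# Normal case: spine-1 hears leaf-1's prefix directly.
print(accepts(SPINE_AS, [LEAF1_AS]))                      # True

# Leaf-1 <-> spine-1 link fails. The only remaining path goes
# leaf-1 -> spine-2 -> leaf-2 -> spine-1, so the update arrives at
# spine-1 with the shared spine AS already in the path:
print(accepts(SPINE_AS, [LEAF2_AS, SPINE_AS, LEAF1_AS]))  # False -> dropped
```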

read more see 14 comments