Category: data center
Some People Don’t Get It: It Will Eventually Fail
Mark Baker left this comment on my Stretched Firewalls across Layer-3 DCI blog post:
Strange how inter-DC clustering failure is considered a certainty in this blog.
Call it experience or exposure to a larger dataset. Anything you build will eventually fail; just because you haven’t experienced a failure yet doesn’t mean the system will never fail, only that you’ve been lucky so far.
First Guest Speaker in Building Next-Generation Data Center Course
When I started thinking about my first online course, I decided to create something special – it should be way more than me talking about cool new technologies and designs – and the guest speakers are a crucial part of that experience.
The first guest speaker is one of the gurus of network design and complexity; he has written numerous books on the topic and recently worked on a hardware-independent network operating system.
Shortest Path Bridging (SPB) and Avaya Fabric on Software Gone Wild
A few months ago I met a number of great engineers from Avaya and they explained to me how they creatively use Shortest Path Bridging (SPB) to create layer-2, layer-3, L2VPN, L3VPN and even IP Multicast fabrics – it was clearly time for another deep dive into SPB.
It took me a while to meet again with Roger Lapuh, but finally we started exploring the intricacies of SPB, and even compared it to MPLS for engineers more familiar with MPLS/VPN. Interested? Listen to Episode 54 of Software Gone Wild.
Host-to-Network Multihoming Kludges
Continuing our routing-on-hosts discussions, Enno Rey (of Troopers and IPv6 security fame) made another interesting remark: “years ago we were so happy when we finally got rid of gated on Solaris.” I countered with “there are still people who fondly remember the days of running gated on Solaris,” because it’s a nice solution to the host-to-network multihoming problem.
New Experiment: Interactive Online Course
After I told you that I’m not going to Interop, I got numerous emails along the lines of “but I was really looking forward to attending your workshop,” so I started looking for a solution that would combine the best of online and classroom worlds.
Here’s my first attempt: an interactive online course combining topics from two of my Interop workshops. I’m still working on the detailed agenda and plan to have it ready around May 1st. In the meantime, I’d really appreciate your feedback – leave a comment or send me an email.
Video: All You Need Are Two Switches
I’ve been telling you to build small-to-midsized data centers with two switches for years ;) A few weeks ago I turned the presentation I had on that topic into a webinar, and the first video from that webinar (now part of Designing Private Cloud Infrastructure) is already public.
SDN and Whitebox Switches
Some people conflate SDN with whitebox switches preferably running Linux. So what exactly is software-hardware disaggregation, and how do whitebox switches and third-party network operating systems fit into the bigger picture?
I tried to answer these questions in the SDN is not whitebox switching part of (free) Introduction to SDN webinar.
Sysadmins Shouldn’t Be Involved with Routing
I had a great chat with Enno Rey the morning before Troopers 2016 started, in which he made an interesting remark:
I disagree with your idea of running BGP on servers because I think sysadmins shouldn’t be involved with routing.
As (almost) always, it turned out that we were still in violent agreement ;)
How Hard Is It to Think about Failures?
Mr. A. Anonymous, a frequent contributor to my blog posts, left this bit of wisdom in a comment on the VMware NSX Update blog post:
I don't understand the statement that "whole NSX domain remains a single failure domain" because the 3 NSX controllers are deployed in the site with primary NSX manager.
I admit I was a bit imprecise (wasn’t the first time), but is it really that hard to ask oneself “what happens if the DCI link fails?”
Table Sizes in OpenFlow Switches
This article was initially sent to my SDN mailing list. To register for SDN tips, updates, and special offers, click here.
Usman asked a few questions in his comment on my blog, including:
At the moment, local RIB gets downloaded to FIB and we get packet forwarding on a router. If we start evaluating too many fields (PBR) and (assume) are able to push these policies to the FIB - what would become of the FIB table size?
Short answer: It would explode ;)
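Here’s a quick back-of-the-envelope sketch (in Python, with made-up numbers) that shows why: once you match on additional header fields, the forwarding table has to hold the cross-product of all matched fields instead of one entry per destination prefix.

```python
# Back-of-the-envelope illustration (assumed numbers): pushing multi-field
# (PBR-style) match criteria into the forwarding table forces it to hold the
# cross-product of all matched fields, not one entry per destination prefix.

ipv4_prefixes = 600_000   # assumed size of a full Internet routing table
dest_ports    = 1_000     # assumed number of "interesting" TCP/UDP ports
dscp_classes  = 8         # assumed number of traffic classes in the policy

destination_only_fib = ipv4_prefixes
policy_based_fib     = ipv4_prefixes * dest_ports * dscp_classes

print(f"Destination-based FIB: {destination_only_fib:,} entries")
print(f"Multi-field (PBR) FIB: {policy_based_fib:,} entries")  # ~4.8 billion
```

Even with these modest assumptions the table grows by roughly four orders of magnitude, far beyond what any forwarding ASIC can hold, which is one reason multi-field classification usually lives in small, separate ACL/TCAM tables rather than in the main FIB.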
Don’t Run OSPF with Your Customers
Salman left an interesting comment on my Running BGP on Servers blog post:
My prior counterparts thought running OSPF on Mainframes was a good idea. Then we had a routing blackhole due to misconfiguration on the server. Twice! The main issue was the Mainframe admins’ lack of networking/OSPF knowledge.
Well, there’s a reason OSPF is called an Interior Gateway Protocol.
Featured Webinar: Leaf-and-Spine Designs
The featured webinar in March 2016 is the Leaf-and-Spine Designs update to the Leaf-and-Spine Fabrics webinar. In the featured videos (the ones marked with a star) you'll find an in-depth explanation of the BGP features available in Cumulus Linux, including a cool trick that allows you to run EBGP sessions across unnumbered interfaces.
Reader Comments: Spanning Tree Woes
My latest spanning tree protocol (STP) posts generated numerous comments, some of them so relevant that I decided to summarize them in another blog post.
Weird Things Happen
The unidirectional link scenario mentioned by Antonio is pretty well known:
How Realistic Is High-Density Virtualization?
A while ago I guesstimated that most private clouds don’t have more than a few thousand VMs, and that they don’t need more bandwidth than two ToR switches could provide.
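To put that guesstimate in perspective, here’s a quick sanity check; the port counts, port speeds, and VM count are illustrative assumptions, not figures from the original posts.

```python
# Rough sanity check with assumed numbers: how much bandwidth two typical
# ToR switches give a private cloud with "a few thousand VMs".

tor_switches     = 2
ports_per_switch = 48      # assumed 48 x 10GE server-facing ports per switch
port_speed_gbps  = 10
vms              = 3_000   # "a few thousand VMs"

total_gbps = tor_switches * ports_per_switch * port_speed_gbps
print(f"Total server-facing bandwidth: {total_gbps} Gbps")    # 960 Gbps
print(f"Average per VM: {total_gbps * 1000 / vms:.0f} Mbps")  # ~320 Mbps
```

Even under these conservative assumptions every VM gets a few hundred Mbps on average, which is the whole point of the two-switch argument.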
Last autumn Iwan Rahabok published a blog post describing the compute and storage aspects of it, and I had a presentation describing the networking aspects of high-density consolidation. However…
Data Center Fabrics and SDN
A few days ago Inside-IT published an interview Christoph Jaggi did with me. In case you don’t understand German, here’s the English version of it.
There is a lot of talk about data center fabrics. What problem do they try to solve?
Data center fabrics are supposed to solve a simple-to-define problem: building a unified data center infrastructure that seamlessly supports data and storage communications. As always, the devil hides in the details.