Category: data center
Solving the Problem in the Right Place
Sometimes I have this weird feeling that I’m the only loony in town desperately preaching against the stupidities heaped upon infrastructure, so it’s really nice when I find a fellow lost soul. This is what another senior networking engineer sent me:
I belong to a small group of people who think that the source of the problem is the apps and the associated business/security rules: their nature, their complexity, their lifecycle...
Sounds familiar (I probably wrote a few blog posts on this topic in the past), and it only got better.
Networking Trends Discussion with Andrew Lerner and Simon Richard: Part 2
In June 2017, we concluded the Building Next Generation Data Center online course with a roundtable discussion with Andrew Lerner, Research Vice President, Networking, and Simon Richard, Research Director, Data Center Networking @ Gartner.
In the second half of our discussion (first half is here) we focused on these topics:
Update: Arista Data Center Switches
In the past 5+ years I ran at least one Data Center Fabrics Update webinar per year to cover new hardware and software launched by data center switching vendors.
The rate of product and feature launches in the data center switching market is slowing down, so I decided to insert the information on new hardware and software features launched in 2017 directly into the merged videos describing the progress various vendors made over the last few years.
First in line: Arista EOS. You can access the videos if you bought the webinar recording in the past or if you have an active ipSpace.net subscription.
… updated on Tuesday, November 2, 2021 15:57 UTC
Redundancy Does Not Result in Resiliency
A while ago a large airline had a bad-hair day claiming it was caused by a faulty power supply. Not surprisingly, I got a question along the lines of “is that feasible?”
Short answer: Yes. However, if that explanation wasn’t made up, someone should be really worried.
Are VXLAN-Based Large Layer-2 Domains Safer?
One of my readers was wondering about the stability and scalability of large layer-2 domains implemented with VXLAN. He wrote:
If common BUM traffic (e.g. ARP) is being handled/localized by the network (e.g. NSX or ACI), and if we are managing what traffic hosts can send with micro-segmentation style filtering blocking broadcast/multicast, are large layer-2 domains still a recipe for disaster?
There are three major (fundamental) problems with large L2 domains:
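One of the concerns behind that question is easy to quantify: with head-end (ingress) replication, every BUM frame an ingress VTEP receives gets unicast-copied to every other VTEP in the flood list, so flooded traffic grows linearly with fabric size no matter how well the overlay localizes ARP. Here's a minimal back-of-the-envelope sketch (the function name and traffic figures are illustrative assumptions, not measurements):

```python
def bum_replication_load(num_vteps: int, bum_pps_per_host: float,
                         hosts_per_vtep: int) -> float:
    """Packets per second an ingress VTEP must send upstream when it
    head-end-replicates BUM traffic to every other VTEP in the segment."""
    copies_per_frame = num_vteps - 1   # one unicast copy per remote VTEP
    return bum_pps_per_host * hosts_per_vtep * copies_per_frame

# Hypothetical fabric: 100 VTEPs, 40 hosts each, 1 ARP broadcast
# per host per second -> 40 * 99 = 3960 replicated packets/second
# leaving every ingress VTEP, before any real traffic is carried.
print(bum_replication_load(100, 1.0, 40))  # 3960.0
```

The point of the exercise: the replication factor scales with the number of VTEPs, which is exactly why the size of a single flooding domain still matters even when the transport is VXLAN.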
Networking Trends Discussion with Andrew Lerner and Simon Richard
In June 2017, we concluded the Building Next Generation Data Center online course with a roundtable discussion with Andrew Lerner, Research Vice President, Networking, and Simon Richard, Research Director, Data Center Networking @ Gartner.
During the first 45 minutes, we covered a lot of topics including:
Video: Building Data Center Fabrics with SPB
There are two reasonable ways of building a layer-2 leaf-and-spine fabric: use VXLAN (the direction almost everyone in the industry is taking at the moment), or routing-on-layer-2 technology like TRILL or SPB.
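For readers who haven't looked at the wire format: VXLAN (RFC 7348) simply wraps the layer-2 frame in UDP with an 8-byte shim header carrying a 24-bit VNI, which is what makes it routable across any IP fabric. A minimal sketch of that header layout:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348: flags byte
    (0x08 = valid-VNI bit), 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First word: flags + reserved; second word: VNI shifted past
    # the trailing reserved byte.
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5000)
assert len(hdr) == 8
print(hdr.hex())  # 0800000000138800
```

TRILL and SPB take the opposite approach: instead of tunneling over IP, they add routing behavior to the layer-2 control plane itself, which is why they never needed an encapsulation like this one.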
Optimize Data Center Infrastructure: Virtualize Network Services
We’re almost done with our data center infrastructure optimization journey. In this step, we’ll virtualize the network services.
Swimlanes, Read-Write Transactions and Session State
Another question from someone watching my Designing Active-Active and Disaster Recovery Data Centers webinar (you know, the one where I tell people how to avoid the world-spanning-layer-2 madness):
In the video about parallel application stacks (swimlanes) you mentioned that one of the options for using the R/W database in Datacenter A if the user traffic landed in Datacenter B in which the replica of the database is read-only was to redirect the user browser with the purpose that the follow up HTTP POST land in Datacenter A.
In the video about parallel application stacks (swimlanes) you mentioned that when user traffic lands in Datacenter B, where the database replica is read-only, one option for reaching the R/W database in Datacenter A is to redirect the user’s browser so that the follow-up HTTP POST lands in Datacenter A.
Here’s the diagram he’s referring to:
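The redirect mechanism itself is simple to sketch. The HTTP detail that matters is the status code: a 307 Temporary Redirect tells the browser to repeat the same method (and body) against the new location, whereas most browsers replay a 302 as a GET. Here's a minimal WSGI sketch; the application name and datacenter URL are hypothetical:

```python
# Hypothetical R/W primary in Datacenter A.
RW_DATACENTER = "https://app.dc-a.example.com"

def replica_app(environ, start_response):
    """Read-only front end in Datacenter B that bounces write
    requests to the R/W primary in Datacenter A."""
    if environ["REQUEST_METHOD"] in ("POST", "PUT", "DELETE"):
        # 307 preserves the method and request body on the retry;
        # a 302 would typically be replayed as a GET.
        location = RW_DATACENTER + environ.get("PATH_INFO", "/")
        start_response("307 Temporary Redirect", [("Location", location)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"served read-only from DC-B"]
```

Whether the extra round-trip (and the user-visible hostname change) is acceptable is an application design question, which is exactly the point of the swimlane discussion.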
Optimize Data Center Infrastructure: Use Distributed File System
Another part of my data center infrastructure optimization presentation has been transcribed, edited and published: use a distributed file system (at least for VM disk images).
Leaf-and-Spine Fabrics: Implicit or Explicit Complexity?
During Shawn Zandi’s presentation describing large-scale leaf-and-spine fabrics I got into an interesting conversation with an attendee who claimed it might be simpler to replace parts of a large fabric with large chassis switches (the largest boxes offered by multiple vendors support up to 576 40GE or even 100GE ports).
As always, you have to decide between implicit and explicit complexity.
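The port-count arithmetic is worth spelling out: a three-stage non-blocking leaf-and-spine fabric built from identical N-port fixed switches supports N²/2 endpoint ports, so even modest fixed switches quickly overtake a single chassis. A quick sketch (assuming an even split of leaf ports between servers and uplinks):

```python
def clos_endpoint_ports(switch_ports: int) -> int:
    """Maximum non-blocking endpoint ports of a 3-stage leaf-and-spine
    fabric built from identical fixed switches: each leaf splits its
    ports evenly between servers and spine uplinks, and each spine
    port connects one leaf."""
    down_per_leaf = switch_ports // 2
    num_leaves = switch_ports
    return down_per_leaf * num_leaves

print(clos_endpoint_ports(32))  # 512 - roughly the 576-port chassis
print(clos_endpoint_ports(64))  # 2048 - far beyond any single chassis
```

The chassis hides the internal fabric (implicit complexity); the leaf-and-spine design puts the same structure on the cabling plan where you can see and troubleshoot it (explicit complexity).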
Cisco ACI, VMware NSX and Programmability
One of my readers sent me a lengthy email describing his NSX-versus-ACI views. He started with [slightly reworded]:
What I want to do is to create customer templates to speed up deployment of application environments, as it takes too long at the moment to set up a new application environment.
That’s what we all want. How you get there is the interesting part.
Optimize Data Center Infrastructure: Reduce the Number of Uplinks
The work of editing transcripts of my two switches presentation is (very slowly) moving forward. In the fourth part of the Optimize Your Data Center Infrastructure series I’m talking about reducing the number of uplinks.
Failure Is Inevitable – Deal with It!
Last week a large European financial institution had a bad hair day. My friend Christoph Jaggi asked for my opinion, and I decided not to focus on the specific problem (that’s what post-mortems are for) but to point out something that’s often forgotten: don’t believe your system won’t fail, be prepared to deal with the failure.
What is VxRail?
One of my readers was considering Dell/EMC hyperconverged solutions and sent me this question:
Just wondering if you have a chance to check out VxRail.
I read the data sheet and spec sheet, but have never seen anyone using it (any real-life experience highly welcome – please write a comment).