Category: design
Optimize Data Center Infrastructure: Go with 10GE
I published the third installment of the Optimize Your Data Center Infrastructure story on my main web site. In this part I’m telling you to go with 10GE and consider 25GE.
Two Switches Saga: Now in Text Format
Remember the All You Need Are Two Switches saga? Several readers told me they’d like to have it in text (article) format, so I found a transcription service and started editing and publishing what they produced. The first two installments are already online.
On a related topic: we’ll discuss the viability of this approach at the April DIGS event in Zurich, Switzerland.
Why Didn’t We Have Leaf-and-Spine Fabrics a Decade Ago?
One of my readers watched my Leaf-and-Spine Fabric Architectures webinar and had a follow-up question:
You mentioned that the 3-tier architecture was dictated primarily by port count and throughput limits. I can understand that port density was a problem, but can you elaborate on why throughput is also a limitation? Do you mean that a core switch like the 6500 is also not suitable for building a 2-tier network in terms of throughput?
As always, the short answer is “it depends”: in this case, on your access port count and bandwidth requirements.
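To put some (made-up) numbers on the port count argument, here’s a back-of-the-envelope sketch in Python; the port counts and link speeds are hypothetical examples, not recommendations:

    # Back-of-the-envelope sizing of a two-tier leaf-and-spine fabric.
    # Every leaf needs one uplink per spine, so the number of spines
    # equals the number of leaf uplinks, and every spine port terminates
    # exactly one leaf uplink.
    def two_tier_fabric(spine_ports, leaf_uplinks, leaf_access_ports,
                        access_gbps, uplink_gbps):
        max_leaves = spine_ports
        max_access_ports = max_leaves * leaf_access_ports
        oversubscription = (leaf_access_ports * access_gbps) / (leaf_uplinks * uplink_gbps)
        return max_leaves, max_access_ports, oversubscription

    # Hypothetical example: 32-port spines, 48-port leaves with 4 uplinks
    leaves, ports, osub = two_tier_fabric(32, 4, 48, 10, 40)
    print(f"{leaves} leaves, {ports} access ports, {osub:.1f}:1 oversubscription")
    # -> 32 leaves, 1536 access ports, 3.0:1 oversubscription

Once you need more access ports than the spine port density allows (or less oversubscription than the leaf uplinks can deliver), you’re forced into a third tier.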
Worth Reading: Building an OpenStack Private Cloud
It’s uncommon to find an organization that succeeds in building a private OpenStack-based cloud. It’s extremely rare to find one that documented and published the whole process like Paddy Power Betfair did with their OpenStack Reference Architecture whitepaper.
I was delighted to see they decided to do a lot of the things I’d been preaching for ages in blog posts, webinars, and lately in my Next Generation Data Center online course.
Highlights include:
Guest Speakers in the Building Next-Generation Data Center Course
I managed to get another awesome lineup of guest speakers for the Spring 2017 Building Next-Generation Data Center course (starting in less than a month):
Scott Lowe will open the course with a presentation on the impact of open source software in data center environments.
Leaf-and-Spine Fabrics versus Fabric Extenders
One of my readers wondered what the difference between fabric extenders and leaf-and-spine fabrics is:
We are building a new data center for DR, and management wants me to put in a recommendation: either stick with our current Cisco 7K-to-2K ToR FEX solution, or prepare for what seems to be the future of the DC, the spine-and-leaf architecture.
Let’s start with “what is leaf-and-spine architecture?”
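The one-paragraph version, reduced to a few lines of Python (the switch names are made up): every leaf connects to every spine, so the wiring plan is simply the Cartesian product of the two lists, and any two access ports are at most two hops apart. Fabric extenders, by contrast, behave like remote linecards of their parent switch, not like autonomous leaf switches.

    # Leaf-and-spine in a nutshell: every leaf connects to every spine.
    spines = ["spine1", "spine2", "spine3", "spine4"]
    leaves = [f"leaf{n}" for n in range(1, 9)]

    # The complete wiring plan is the Cartesian product of the two lists.
    links = [(leaf, spine) for leaf in leaves for spine in spines]
    print(f"{len(links)} leaf-to-spine links")   # 8 leaves x 4 spines = 32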
Facebook Backpack Behind the Scenes
When Facebook announced 6-pack (their first chassis switch), my reaction was “meh” (as well as “I would love to hear what Brad Hedlund has to say about it”). When Facebook announced Backpack, I mostly ignored the announcement. After all, when one of the cloud-scale unicorns starts talking about their infrastructure, what they tell you is usually low on detail and used primarily as a talent-attracting tool.
Q&A: Migrating to Modern Data Center Infrastructure
One of my readers sent me a list of questions after watching some of my videos, starting with a generic one:
Having worked within large corporations for a long time, I keep asking myself how it will be possible to move from the messy infrastructure we grew over the years to a modern architecture.
Usually by building a parallel infrastructure and eventually retiring the old one; otherwise you’ll end up with layers of kludges. Obviously, the old infrastructure will lurk around for years (I know people who use this approach and currently run three generations of infrastructure).
The Unintended Consequences of NSSA Kludges
Remember the kludges needed to make OSPF NSSA areas work correctly? We concluded that saga by showing how the rules of RFC 3101 force a poor ASBR to choose an IP address on one of its OSPF-enabled interfaces as the forwarding address to be used in Type-7 LSAs.
What could possibly go wrong with such a “simple” concept?
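As a refresher, here’s that selection logic in a loosely paraphrased Python sketch; the preference for loopback interfaces reflects common vendor practice, and the data structures are made up for illustration:

    # A paraphrase of the Type-7 forwarding address selection an NSSA
    # ASBR performs per RFC 3101: pick a non-zero address from one of
    # its OSPF-enabled interfaces, typically preferring loopbacks.
    def select_type7_forwarding_address(interfaces):
        # interfaces: list of (name, ip, ospf_enabled, is_loopback) tuples
        candidates = [i for i in interfaces if i[2]]
        loopbacks = [i for i in candidates if i[3]]
        chosen = (loopbacks or candidates or [None])[0]
        return chosen[1] if chosen else None

    ifaces = [
        ("GigabitEthernet0/1", "10.0.1.1", True, False),
        ("Loopback0", "192.168.0.1", True, True),
    ]
    print(select_type7_forwarding_address(ifaces))   # -> 192.168.0.1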
OSPF Forwarding Address YAK: Take 2
In my initial OSPF Forwarding Address blog post, I described a common Forwarding Address (FA) use case (at least as preached on the Internet): two ASBRs connected to a single external subnet, with route redistribution configured on only one of them.
That design is clearly broken from the reliability perspective, but are there other designs where OSPF FA might make sense?
Never Take Two Chronometers to Sea
One of the quotes I found in the Mythical Man-Month came from the pre-GPS days: “never go to sea with two chronometers, take one or three”, and it’s amazing that the networking industry (and a few others) never got the message.
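The reasoning is pure quorum logic; here’s a minimal sketch (the clock readings and tolerance are made up):

    # With two sources you can detect a disagreement but not resolve it;
    # with three, a strict majority can outvote the faulty source.
    def majority_reading(readings, tolerance=0.01):
        for candidate in readings:
            agreeing = sum(1 for r in readings if abs(r - candidate) <= tolerance)
            if agreeing > len(readings) / 2:
                return candidate
        return None   # no quorum: you know something is wrong, not what

    print(majority_reading([12.00, 12.02]))          # None: the two just disagree
    print(majority_reading([12.00, 12.01, 12.30]))   # 12.0: majority outvotes the outlier

Replace chronometers with NTP servers and the argument stays exactly the same.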
Worth Reading: the Mythical Man-Month
I was discussing a totally unrelated topic with Terry Slattery when he mentioned a quote from the Mythical Man-Month. It got me curious, I started exploring and found out I can get the book as part of my Safari subscription.
Q&A: Building a Layer-2 Data Center Fabric in 2016
One of my readers, who is designing a new data center fabric that has to provide L2 transport across the data center, sent me this observation:
While we don’t have plans to seek an open solution in our DC, we are considering ACI or VXLAN with EVPN. Our systems integrator partner expressed the view that VXLAN is still very new. Would you share that view?
Assuming he wants to stay with Cisco, what are the other options?
Building a L3-Only Data Center with Cumulus Linux
Dinesh Dutt was the guest speaker in the second Leaf-and-Spine Fabric Design session. After I explained how you can use ARP/ND information to build a layer-3-only data center fabric that still supports IP address mobility, Dinesh described the details of the Cumulus Linux redistribute ARP functionality and demoed how it works in a live data center.
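In case you’re wondering what redistribute ARP means conceptually: turn every reachable ARP entry on a leaf into a /32 host route and hand it to the routing protocol. The sketch below mimics the idea, not the actual Cumulus Linux implementation, and the interface name is a made-up example:

    # Conceptual sketch of "redistribute ARP": every reachable ARP entry
    # becomes a /32 host route the fabric routing protocol can advertise,
    # so an IP address can move between leaves in a layer-3-only fabric.
    import subprocess

    def host_routes_from_arp(ifname):
        out = subprocess.run(
            ["ip", "-4", "neighbor", "show", "dev", ifname],
            capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[-1] == "REACHABLE":
                yield f"{fields[0]}/32"   # first field is the neighbor IP

    # Each /32 would then be injected into the routing protocol (e.g. FRR).
    for route in host_routes_from_arp("swp1"):
        print(route)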
Q&A: Ingress Traffic Flow in Multi-Data Center Deployments
One of my readers was watching the Building Active-Active Data Centers webinar and sent me this question:
I’m wondering if you have additional info on how to address the ingress traffic flow issue. Egress is well explained, but the ingress side wasn’t covered in as much detail.
There’s a reason for that: there’s no good answer.