Category: Data Center
NEC ProgrammableFlow Principles – Q&A
The ProgrammableFlow Principles of Operation section of the ProgrammableFlow Technical Deep Dive webinar generated dozens of questions from the audience – it took us almost 20 minutes to answer all of them (note: you might want to watch the answers after watching the section that triggered the questions).
NEC ProgrammableFlow Principles of Operation
The second part of the ProgrammableFlow Technical Deep Dive webinar describes the ProgrammableFlow principles of operation – physical networks (control, data, HA cluster, management), edge and core switches, topology discovery, and the end-to-end routing and path-setup procedure.
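To illustrate the general idea (and only the general idea – this is a conceptual Python sketch of controller-based path setup, not NEC’s actual API; the topology and switch names are made up), the controller computes a path across the discovered topology and would then install a flow entry on every switch along it:

```python
# Conceptual sketch of controller-driven end-to-end path setup:
# the generic OpenFlow-controller workflow, not NEC's implementation.
from collections import deque

# Discovered topology as an adjacency list (hypothetical fabric).
topology = {
    "edge1": ["core1", "core2"],
    "edge2": ["core1", "core2"],
    "core1": ["edge1", "edge2"],
    "core2": ["edge1", "edge2"],
}

def shortest_path(src, dst):
    """Plain BFS over the discovered topology."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in topology[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])

for switch in shortest_path("edge1", "edge2"):
    # A real controller would send an OpenFlow flow-mod message here.
    print(f"install flow entry on {switch}")
```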
The Saga of Oversubscriptions
Matt Thompson provided a really good answer to the “what’s an acceptable oversubscription ratio in a ToR switch” question when he wrote: “I’m expecting a ‘how long is a piece of string’ answer” (note: do watch the BBC video answering that one).
There’s the 3:1 rule-of-thumb recipe, with a more realistic answer being “it depends”. Now let’s see if we can go beyond that without a deep dive into scholastic waters.
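To put a number on that rule of thumb, here’s a minimal Python sketch: the oversubscription ratio is simply server-facing bandwidth divided by fabric-facing bandwidth. The port counts below are the typical ToR configuration (48 x 10GE server ports, 4 x 40GE uplinks – the same layout the Nexus 6001 below uses), purely as an example:

```python
# Oversubscription ratio of a ToR switch: total downlink (server-facing)
# bandwidth divided by total uplink (fabric-facing) bandwidth.

def oversubscription_ratio(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Return the downlink-to-uplink bandwidth ratio."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Typical ToR configuration: 48 x 10GE server ports, 4 x 40GE uplinks.
ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"{ratio}:1")  # 3.0:1, the rule-of-thumb figure
```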
Intra-Spine Links in Leaf-and-Spine Fabrics
I had an interesting conversation with Doug Hanks (@douglashanksjr) about the need for intra-spine links in leaf-and-spine fabric designs. You clearly don’t need links between spine switches when every leaf node (switch or router/firewall/load balancer) is connected to all spine switches ... but what happens when one of the leaf-to-spine links fails? Will other leaf switches know that they have to avoid the spine switch with the failed link?
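To make the failure scenario concrete, here’s a minimal Python sketch over a hypothetical two-spine, three-leaf fabric (the names and topology are mine, purely for illustration). Once the leaf1-to-spine1 link fails, spine1 can no longer deliver traffic to leaf1, so other leaves must stop using spine1 for that destination – either because the fabric routing protocol withdraws the path or because an intra-spine link provides a detour:

```python
# Hypothetical fabric: every leaf connects to every spine,
# then the leaf1-to-spine1 link fails.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3"]

links = {(leaf, spine) for leaf in leaves for spine in spines}
links.discard(("leaf1", "spine1"))  # the failed link

def usable_spines(src, dst):
    """Spines that can still carry leaf-to-leaf traffic from src to dst."""
    return [s for s in spines if (src, s) in links and (dst, s) in links]

print(usable_spines("leaf2", "leaf1"))  # ['spine2']: spine1 would blackhole
print(usable_spines("leaf2", "leaf3"))  # ['spine1', 'spine2']: unaffected pair
```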
Nexus 6000 and 40GE – why do I care?
Cisco launched two new data center switches on Monday: Nexus 6001, a 1RU ToR switch with the exact same port configuration as almost any other ToR switch on the market (48 x 10GE, 4 x 40GE usable as 16 x 10GE), and Nexus 6004, a monster spine switch with 96 40GE ports in a 4RU form factor (the same bandwidth as Arista’s 7508 and three times as many 40GE ports as Dell Force10’s Z9000).
Apart from slightly higher port density, Nexus 6001 looks almost like Nexus 5548 (which has 48 10GE ports) or Nexus 3064X. So where’s the beef?
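As a back-of-the-envelope check of the bandwidth claims above (the 7508 and Z9000 port counts below are the commonly quoted figures – my assumption, not anything from Cisco’s launch materials):

```python
# Aggregate port bandwidth in Gbps. The Arista 7508 (384 x 10GE) and
# Dell Force10 Z9000 (32 x 40GE) figures are commonly quoted numbers,
# assumed here for the comparison.
nexus_6004_gbps = 96 * 40    # 3840
arista_7508_gbps = 384 * 10  # 3840
z9000_40ge_ports = 32

print(nexus_6004_gbps == arista_7508_gbps)  # True: same aggregate bandwidth
print(96 // z9000_40ge_ports)               # 3: three times the 40GE ports
```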
Long-Distance vMotion, Stretched HA Clusters and Business Needs
During a recent vMotion-over-VXLAN discussion Chris Saunders made a very good point: “Folks should be asking a better question, like: Can I use VXLAN and vMotion together to meet my business requirements.”
Yeah, it’s always worth exploring the actual business needs.
Based on a true story ...
A while ago I was sitting in a roomful of extremely intelligent engineers working for a large data center company. Unfortunately they had been listening to the wrong group of virtualization consultants and ended up with the picture-perfect disaster-in-waiting: two data centers bridged together to support a stretched VMware HA cluster.
NEC Launched a Virtual OpenFlow Switch – Does It Matter?
On January 22nd NEC launched another component of their ProgrammableFlow architecture: a virtual switch for Hyper-V 3.0 environments. The obvious questions to ask would be: (a) why do we care, and (b) how’s that different from Nicira or BigSwitch?
TL&DR summary: It depends.
How would you like to configure Policy-Based Routing (PBR)?
Adam Sweeney, VP of EOS Engineering at Arista Networks, asked me a challenging question after my I-so-hate-PBR-CLI rant: “Is there something in particular that makes the IOS PBR CLI so painful? Is there a PBR CLI provided by any of the other systems out there that you like a lot better?”
My Twitter friends helped me find the answer to the second question: PBR in Junos is even more convoluted than it is in Cisco IOS... but what would be a better CLI?
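Just to make the question tangible, here’s a purely hypothetical sketch (expressed as a Python data structure; no vendor ships anything like this) of what a more declarative PBR policy could look like, with match criteria, next hop and fallback behavior kept in one place instead of being scattered across ACLs, route-maps and interface commands:

```python
# Purely hypothetical PBR policy model: an illustration of what a
# friendlier configuration interface might express, not any vendor's
# actual CLI or API.
policy = {
    "name": "web-traffic-via-dmz",
    "apply_to": ["GigabitEthernet0/1"],  # ingress interfaces
    "match": {
        "source": "10.0.1.0/24",
        "protocol": "tcp",
        "dest_port": 80,
    },
    "action": {
        "next_hop": "192.0.2.1",      # hypothetical next-hop router
        "fallback": "routing-table",  # use normal routing if the next hop is down
    },
}

# A front end (CLI, YAML, API) would validate the policy and render it
# into whatever the underlying platform expects.
print(policy["name"], "->", policy["action"]["next_hop"])
```

Whether such a policy is rendered as CLI commands, YAML or an API call matters less than keeping the whole thing in one readable, reviewable unit.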
Redundant Data Center Internet Connectivity – High-Level Design
Yesterday I described the roadblocks you might encounter when faced with a seemingly simple challenge:
In a network with two data centers (connected with a DCI link), ensure the applications in a data center stay reachable even if its Internet links fail.
In the Solutions Corner (a brand new part of my web site) you’ll find a short high-level design document describing the overall solution and listing the technologies you could use to implement it (you might want to watch the video before reading the document).
Redundant Data Center Internet Connectivity – Problem Overview
During one of my ExpertExpress consulting engagements I encountered an interesting challenge:
We have a network with two data centers (connected with a DCI link). How could we ensure the applications in a data center stay reachable even if all local Internet links fail?
On the face of it, the problem seems trivial; after all, you already have the DCI link in place, so what’s the big deal ... but we quickly figured out that the problem is trickier than it seems.