Category: data center
Intra-Spine Links in Leaf-and-Spine Fabrics
I had an interesting conversation with Doug Hanks (@douglashanksjr) about the need for intra-spine links in leaf-and-spine fabric designs. You clearly don’t need links between spine switches when every leaf node (switch or router/firewall/load balancer) is connected to all spine switches ... but what happens when one of the leaf-to-spine links fails? Will other leaf switches know that they have to avoid the spine switch with the failed link?
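A minimal sketch of the problem in Python (hypothetical topology: two spines, three leaves, every leaf uplinked to every spine, with the L1-S1 link failed):

```python
# Minimal sketch, hypothetical topology: two spines (S1, S2), three leaves (L1-L3),
# every leaf uplinked to every spine, with the L1-S1 link failed.
links = {
    ("L1", "S1"): False,   # the failed leaf-to-spine link
    ("L1", "S2"): True,
    ("L2", "S1"): True, ("L2", "S2"): True,
    ("L3", "S1"): True, ("L3", "S2"): True,
}

def usable_spines(src_leaf, dst_leaf):
    """Spines that can carry src_leaf -> dst_leaf traffic without an intra-spine detour."""
    up_from_src = {spine for (leaf, spine), up in links.items() if up and leaf == src_leaf}
    return {spine for spine in up_from_src if links.get((dst_leaf, spine), False)}

# Unless L2 learns about the L1-S1 failure (through the routing protocol), it keeps
# load-balancing L1-bound traffic across both spines and blackholes the flows hashed
# toward S1 -- or S1 needs an intra-spine link to S2 as a detour.
print(usable_spines("L2", "L1"))   # {'S2'}
```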
Nexus 6000 and 40GE – why do I care?
Cisco launched two new data center switches on Monday: Nexus 6001, a 1RU ToR switch with the exact same port configuration as any other ToR switch on the market (48 x 10GE, 4 x 40GE usable as 16 x 10GE), and Nexus 6004, a monster spine switch with 96 40GE ports (the same bandwidth as Arista’s 7508 in a 4RU form factor, and three times as many 40GE ports as Dell Force10 Z9000).
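A quick back-of-the-envelope comparison using only the port counts quoted above (the Z9000 figure is derived from the three-times-as-many-40GE-ports claim):

```python
# Raw port bandwidth (Gbps) computed from the port counts quoted above;
# the Z9000 figure is derived from the "three times as many 40GE ports" comparison.
nexus_6001 = 48 * 10 + 4 * 40    # 640 Gbps (the 4 x 40GE usable as 16 x 10GE)
nexus_6004 = 96 * 40             # 3840 Gbps in a 4RU box
z9000      = (96 // 3) * 40      # 1280 Gbps (32 x 40GE)

print(nexus_6001, nexus_6004, z9000)   # 640 3840 1280
```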
Apart from the slightly higher port density, Nexus 6001 looks almost identical to Nexus 5548 (which has 48 10GE ports) or Nexus 3064X. So where’s the beef?
Long-Distance vMotion, Stretched HA Clusters and Business Needs
During a recent vMotion-over-VXLAN discussion Chris Saunders made a very good point: “Folks should be asking a better question, like: Can I use VXLAN and vMotion together to meet my business requirements.”
Yeah, it’s always worth exploring the actual business needs.
Based on a true story ...
A while ago I was sitting in a roomful of extremely intelligent engineers working for a large data center company. Unfortunately, they had been listening to the wrong group of virtualization consultants and ended up with a picture-perfect disaster-in-waiting: two data centers bridged together to support a stretched VMware HA cluster.
NEC Launched a Virtual OpenFlow Switch – Does It Matter?
On January 22nd NEC launched another component of their ProgrammableFlow architecture: a virtual switch for Hyper-V 3.0 environments. The obvious questions to ask would be: (a) why do we care, and (b) how is it different from Nicira or BigSwitch?
TL&DR summary: It depends.
How Would You Like to Configure Policy-Based Routing (PBR)?
Adam Sweeney, VP of EOS Engineering at Arista Networks, asked me a challenging question after my I-so-hate-PBR-CLI rant: “Is there something in particular that makes the IOS PBR CLI so painful? Is there a PBR CLI provided by any of the other systems out there that you like a lot better?”
My Twitter friends helped me find the answer to the second question: PBR in Junos is even more convoluted than it is in Cisco IOS... but what would be a better CLI?
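Purely as a strawman (every name and field below is made up, not an existing CLI or API), one possible direction is a declarative policy kept in a single place instead of scattered ACLs and route-maps -- roughly along these lines, sketched in Python:

```python
# Purely illustrative: a PBR policy expressed as data, with match criteria
# and forwarding actions living side by side in one place.
pbr_policy = [
    {   # hypothetical rule: steer outbound HTTP through a dedicated next hop
        "match":  {"protocol": "tcp", "dst_port": 80},
        "action": {"next_hop": "192.0.2.1"},
    },
    {   # catch-all: everything else follows the regular routing table
        "match":  {},
        "action": {"next_hop": None},   # None = use the routing table
    },
]

def lookup(packet, policy):
    """Return the action of the first rule whose match criteria all apply to the packet."""
    for rule in policy:
        if all(packet.get(field) == value for field, value in rule["match"].items()):
            return rule["action"]

print(lookup({"protocol": "tcp", "dst_port": 80, "src": "10.1.1.10"}, pbr_policy))
# -> {'next_hop': '192.0.2.1'}
```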
Redundant Data Center Internet Connectivity – High-Level Design
Yesterday I described the roadblocks you might encounter when faced with a seemingly simple challenge:
In a network with two data centers (connected with a DCI link), ensure the applications in a data center stay reachable even if its Internet links fail.
In the Solutions Corner (a brand-new part of my website) you’ll find a short high-level design document describing the overall solution and listing the technologies you could use to implement it (you might want to watch the video before reading the document).
Redundant Data Center Internet Connectivity – Problem Overview
During one of my ExpertExpress consulting engagements I encountered an interesting challenge:
We have a network with two data centers (connected with a DCI link). How could we ensure the applications in a data center stay reachable even if all local Internet links fail?
On the face of it, the problem seems trivial; after all, you already have the DCI link in place, so what’s the big deal ... but we quickly figured out the problem is trickier than it seems.
Link Aggregation with Stackable Data Center Top-of-Rack Switches
Tomas Kubica made an interesting comment to my Stackable Data Center Switches blog post: “Suppose all your servers have 4x 10G port and you bundle them to LACP NIC team [...] With this stacking link is not going to be used for your inter-server traffic if all servers have active connections to all nodes of your ToR stack.” While he’s technically correct, the idea of having four 10GE ports on each server just to cater to the whims of stackable switches is somewhat hard to sell.
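His argument is easy to sketch (hypothetical four-member stack, every server with one LAG member port on every stack switch): whichever stack member receives a frame also owns a local port toward the destination server, so the stacking links carry no inter-server traffic.

```python
# Hypothetical 4-member stack; every server has one LAG member on every stack switch.
stack_members = ["sw1", "sw2", "sw3", "sw4"]
server_lag = {server: set(stack_members) for server in ["A", "B", "C"]}

def crosses_stack_link(ingress_member, dst_server):
    """True if a frame arriving on ingress_member must traverse a stacking link."""
    return ingress_member not in server_lag[dst_server]

# Traffic from server A arriving on any stack member can exit locally toward B.
print(any(crosses_stack_link(m, "B") for m in server_lag["A"]))   # False
```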
Hyper-V Network Virtualization (HNV/NVGRE): Simply Amazing
In August 2011, when the NVGRE draft appeared mere days after VXLAN was launched, I dismissed it as “more of the same, different encapsulation, vague control plane”. Boy, was I wrong … and pleasantly surprised when I figured out that one of the major virtualization vendors had actually done the right thing.
TL;DR Summary: Hyper-V Network Virtualization is a layer-3 virtual networking solution with a centralized (orchestration-system-based) control plane. Its scaling properties are thus way better than VXLAN’s (or Nicira’s … unless they’ve implemented L3 forwarding since the last time we spoke).
Who the **** needs 16 uplinks? Welcome to the 10GE world!
Will made an interesting comment to my Stackable Data Center Switches article: “Who the heck has 16 uplinks?” Most of us do in the brave new 10GE world.
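Here’s the back-of-the-envelope math, assuming an illustrative 48-port 10GE ToR switch and a 3:1 oversubscription target:

```python
server_ports  = 48      # 10GE server-facing ports on a typical ToR switch (illustrative)
port_speed    = 10      # Gbps
oversub_ratio = 3       # target oversubscription (downlink : uplink), illustrative

downlink_bw = server_ports * port_speed       # 480 Gbps toward the servers
uplink_bw   = downlink_bw / oversub_ratio     # 160 Gbps toward the fabric
uplinks     = int(uplink_bw / port_speed)     # 16 x 10GE uplinks

print(uplinks)   # 16 -- exactly the 4 x 40GE (16 x 10GE) found on typical ToR switches
```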
Large Leaf-and-Spine Fabrics with Dell Force10 Switches Using 10GE Uplinks
The second scenario Brad Hedlund described in the Clos Fabrics Explained webinar is a large leaf-and-spine fabric using 10GE uplinks and QSFP+ breakout cables between leaf and spine switches (thus increasing the number of spine switches to 16).
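Rough fabric-sizing math for that scenario, assuming (for illustration only) leaf switches with 48 x 10GE server ports and 16 x 10GE uplinks, and spine switches whose 32 x 40GE ports are broken out into 128 x 10GE leaf-facing ports:

```python
spines            = 16      # one 10GE uplink from every leaf to every spine
spine_ports_10ge  = 32 * 4  # 32 x 40GE spine ports broken out into 128 x 10GE (assumed)
leaf_server_ports = 48      # 10GE server ports per leaf (3:1 oversubscription with 16 uplinks)

max_leaves       = spine_ports_10ge                # each leaf consumes one 10GE port per spine
max_server_ports = max_leaves * leaf_server_ports  # total 10GE server-facing ports

print(max_leaves, max_server_ports)   # 128 leaves, 6144 server ports
```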
IPv6 Prefixes Longer Than /64 Might Be Harmful
A while ago I wrote a blog post about remote ND attacks, which included the idea of using /120 prefixes on server LANs. As it turns out, that was a bad idea, and as nosx pointed out in his comment: “there is quite a long list of caveats in all vendor camps regarding hardware in the last 6-8 years that has some potentially painful hardware issues regarding prefix length. Classic issues include ACL construction and TCAM specificity.”
One would hope that the newly released data center switches fare better. Fat chance!
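For context, here’s the address-space arithmetic that made /120 prefixes look attractive in the first place: a remote attacker sweeping a /64 can trigger an enormous number of incomplete ND cache entries, while a /120 limits the sweep to 256 addresses.

```python
# Number of addresses an attacker can probe on a single subnet.
addresses_in_64  = 2 ** (128 - 64)    # 18,446,744,073,709,551,616 -- plenty for ND cache exhaustion
addresses_in_120 = 2 ** (128 - 120)   # 256 -- trivially small by comparison

print(addresses_in_64, addresses_in_120)
```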
VXLAN Gateways
Mark Berly, the guest star of my VXLAN Technical Deep Dive webinar, focused on VXLAN gateways. Here’s the first part of his presentation, explaining what VXLAN gateways are and where you’d need them.
Stackable Data Center Switches? Do the Math!
Imagine you have a typical 2-tier data center network (because 3-tier is so last millennium): layer-2 top-of-rack switches redundantly connected to a pair of core switches running MLAG (to get around spanning tree limitations) and IP forwarding between VLANs.
The next thing you know, a rep from your favorite vendor comes along and says: “Did you know you could connect all your ToR switches into a virtual fabric and manage them as a single entity?” Is that a good idea?
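Before you answer, do the math. With purely hypothetical numbers (an 8-member stack of 48-port 10GE switches and a stacking ring built from 2 x 40GE links per member), the stacking links end up heavily oversubscribed:

```python
# All numbers below are hypothetical -- plug in your vendor's real figures.
members       = 8         # ToR switches in the stack
server_ports  = 48        # 10GE server ports per member
stack_ring_bw = 2 * 40    # Gbps of stacking bandwidth per member (2 x 40GE ring links)

server_bw_per_member = server_ports * 10             # 480 Gbps of server-facing bandwidth
stack_oversub        = server_bw_per_member / stack_ring_bw

print(f"{stack_oversub:.0f}:1 oversubscription on the stacking links")   # 6:1
```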
VXLAN Is Not a Data Center Interconnect Technology
In a comment to the Firewalls in a Small Private Cloud blog post I wrote that “VXLAN is NOT a viable inter-DC solution,” and Jason wasn’t exactly happy with my blanket response. I hope Jason got a detailed answer in the VXLAN Technical Deep Dive webinar; here’s a somewhat shorter explanation.