Category: workshop
NEC ProgrammableFlow Scalability Features
Once you get rid of spanning tree and its associated kludges (not too hard in OpenFlow-based networks), BUM (broadcast, unknown unicast, multicast) flooding becomes your biggest enemy. NEC’s engineers implemented some interesting features in the ProgrammableFlow switches and controllers: rate-limiting of unknown unicast frames, flooding control, and ARP snooping (if only they’d go for ARP proxy).
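To illustrate the general idea behind unknown-unicast rate limiting, here’s a minimal token-bucket sketch – a generic illustration, not NEC’s actual implementation; the rate and burst values are made up:

```python
import time

class UnknownUnicastLimiter:
    """Token bucket gating the flooding of frames that miss the MAC table."""
    def __init__(self, rate_pps=100, burst=200):
        self.rate = rate_pps            # tokens (frames) replenished per second
        self.tokens = burst             # current bucket level
        self.burst = burst              # bucket depth
        self.last = time.monotonic()

    def allow_flood(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def output_ports(mac_table, frame, limiter):
    out_port = mac_table.get(frame["dst_mac"])
    if out_port is not None:
        return [out_port]                               # known unicast: single port
    return ["flood"] if limiter.allow_flood() else []   # unknown unicast: rate-limited
```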
Hot and Cold VM Mobility
Another day, another interesting Expert Express engagement, another stretched layer-2 design solving the usual requirement: “We need inter-DC VM mobility.”
The usual question (“And why would you want to vMotion a VM between data centers?”) got a refreshing answer this time: “Oh, no, that would not work for us.”
WAN Routing in Data Centers with Layer-2 DCI
A while ago I got an interesting question:
Let's say that due to circumstances outside of your control, you must have stretched data center subnets... What is the best method to get these subnets into OSPF? Should they share a common area at each data center or should each data center utilize a separate area for the same subnet?
Assuming someone hasn’t sprinkled the application willy-nilly across the two data centers, it’s best if the data center edge routers advertise the subnets used by the applications as type-2 external routes, ensuring one data center is always the primary entry point for a specific subnet. Getting the same results with BGP routing in the public Internet is a much tougher challenge.
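Here’s a quick sketch of why type-2 externals give you a deterministic entry point (my illustration, with made-up metrics): OSPF compares E2 routes on the advertised metric alone, ignoring the internal cost to reach the advertising router, so the data center advertising the lower metric wins from everywhere in the network.

```python
# E1 cost grows with the internal distance to the ASBR; E2 cost is the
# advertised metric alone (internal cost only breaks ties), so an E2
# route advertised with a lower metric wins throughout the network.

def ospf_external_cost(metric_type, advertised_metric, cost_to_asbr):
    if metric_type == "E1":
        return advertised_metric + cost_to_asbr
    return advertised_metric          # E2: internal topology ignored

# Stretched subnet advertised by both DC edge routers as E2:
dc_a = ospf_external_cost("E2", advertised_metric=100, cost_to_asbr=10)
dc_b = ospf_external_cost("E2", advertised_metric=200, cost_to_asbr=1)
assert dc_a < dc_b   # DC-A is always the primary entry point
```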
NEC ProgrammableFlow Principles - Q & A
The ProgrammableFlow Principles of Operations section of the ProgrammableFlow Technical Deep Dive webinar generated dozens of questions from the audience – it took us almost 20 minutes to answer all of them (note: you might want to watch the answers after watching the section that triggered the questions).
NEC ProgrammableFlow Principles of Operation
The second part of the ProgrammableFlow Technical Deep Dive webinar describes ProgrammableFlow principles of operation – the physical networks (control, data, HA cluster, management), edge and core switches, topology discovery, and the end-to-end routing and path setup procedure.
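In a nutshell, an OpenFlow controller like ProgrammableFlow computes a path over the discovered topology and installs a flow entry on every switch along it. Here’s a generic sketch of that procedure – not the actual ProgrammableFlow code; topology, port numbers and match fields are invented for the example:

```python
from collections import deque

def shortest_path(topology, src, dst):
    """BFS over the discovered topology; returns the list of switches from src to dst."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for neighbor in topology[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                prev[neighbor] = node
                queue.append(neighbor)
    return None

def install_path(topology, src, dst, flow_match):
    """Push one flow entry per switch along the path (match -> hop-by-hop output port)."""
    path = shortest_path(topology, src, dst)
    if not path:
        return
    for here, nxt in zip(path, path[1:]):
        out_port = topology[here][nxt]     # port on 'here' facing 'nxt'
        print(f"switch {here}: match {flow_match} -> output port {out_port}")

fabric = {
    "edge1": {"core1": 1},
    "core1": {"edge1": 2, "edge2": 3},
    "edge2": {"core1": 1},
}
install_path(fabric, "edge1", "edge2", {"dst_mac": "00:00:5e:00:53:01"})
```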
The Saga of Oversubscriptions
Matt Thompson provided a really good answer to the “what’s an acceptable oversubscription ratio in a ToR switch” question when he wrote “I’m expecting a ‘how long is a piece of string’ answer” (note: do watch the BBC video answering that one).
There’s the 3:1 rule-of-thumb recipe, with a more realistic answer being “it depends”. Now let’s see if we can go beyond that without a deep dive into scholastic waters.
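For illustration, here’s the arithmetic behind the 3:1 figure on a typical ToR switch (example port counts, not a recommendation):

```python
# Typical ToR switch: 48 x 10GE server-facing ports, 4 x 40GE uplinks
downlink_gbps = 48 * 10        # 480 Gbps toward the servers
uplink_gbps = 4 * 40           # 160 Gbps toward the spine layer
print(f"oversubscription = {downlink_gbps // uplink_gbps}:1")   # 3:1
```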
VXLAN Gateway Design Guidelines
Mark Berly spent plenty of time explaining the in-depth intricacies of VXLAN-to-VLAN gateways during our VXLAN Technical Deep Dive webinar. He’s obviously heavily immersed in this challenge and hits 9+ on the Nerd Meter, so you might have to watch the video a few times to get all the nuances. What can I say – we’ll have fun times in the coming years ;)
ProgrammableFlow Webinar – Introduction
Here’s the first video from the ProgrammableFlow webinar. Don’t get too excited – it’s yet another introduction to OpenFlow and control/data plane separation (but it is an animated one ;). More goodies to follow ...
Intra-Spine Links in Leaf-and-Spine Fabrics
I had an interesting conversation with Doug Hanks (@douglashanksjr) about the need for intra-spine links in leaf-and-spine fabric designs. You clearly don’t need links between spine switches when every leaf node (switch or router/firewall/load balancer) is connected to all spine switches ... but what happens when one of the leaf-to-spine links fails? Will other leaf switches know that they have to avoid the spine switch with the failed link?
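One way to reason about it (my own illustration, assuming a routing protocol runs between leaves and spines): the spine that lost its link withdraws the affected prefix, and the other leaves shrink their ECMP next-hop set for that prefix accordingly. A tiny sketch of that logic, with invented switch names:

```python
# Spines advertise reachability for each leaf prefix they can still reach;
# every leaf builds a per-prefix ECMP set from those advertisements.

spines = {"S1": {"L1", "L2", "L3"}, "S2": {"L1", "L2", "L3"}}

def ecmp_next_hops(spines, target_leaf):
    return {s for s, leaves in spines.items() if target_leaf in leaves}

print(ecmp_next_hops(spines, "L3"))    # {'S1', 'S2'} -- both spines usable

spines["S1"].discard("L3")             # S1-to-L3 link fails; S1 withdraws the prefix
print(ecmp_next_hops(spines, "L3"))    # {'S2'} -- other leaves now avoid S1 for L3
```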
Nexus 6000 and 40GE – why do I care?
Cisco launched two new data center switches on Monday: Nexus 6001, a 1RU ToR switch with the exact same port configuration as any other ToR switch on the market (48 x 10GE, 4 x 40GE usable as 16 x 10GE), and Nexus 6004, a monster spine switch with 96 x 40GE ports (it has the same bandwidth as Arista’s 7508 in a 4RU form factor and three times as many 40GE ports as Dell Force10 Z9000).
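Back-of-the-envelope numbers behind the bandwidth claim (my arithmetic, assuming the first-generation Arista 7508 maxes out at 384 x 10GE ports):

```python
nexus_6004_gbps = 96 * 40      # 3840 Gbps of 40GE ports in a 4RU box
arista_7508_gbps = 384 * 10    # 3840 Gbps across a full 8-slot chassis
print(nexus_6004_gbps == arista_7508_gbps)   # True -- same aggregate bandwidth
```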
Apart from slightly higher port density, Nexus 6001 looks almost like Nexus 5548 (which has 48 x 10GE ports) or Nexus 3064X. So where’s the beef?
IPv6 addressing in SMB environment
Martin Bernier has decided to open another can of IPv6 worms: how do you address multiple subnets in a very typical setup where you use a firewall (example: ASA) to connect an SMB network to the outside world?
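One common approach is to carve per-segment /64s out of a single provider-assigned prefix. A quick illustration using Python’s ipaddress module – the /56 documentation prefix and the segment names are made up, not Martin’s actual setup:

```python
import ipaddress

site = ipaddress.ip_network("2001:db8:0:100::/56")     # hypothetical PA prefix
segments = ["outside", "dmz", "inside", "guest-wlan"]  # firewall security zones

for name, subnet in zip(segments, site.subnets(new_prefix=64)):
    print(f"{name:10} {subnet}")
# outside    2001:db8:0:100::/64
# dmz        2001:db8:0:101::/64
# inside     2001:db8:0:102::/64
# guest-wlan 2001:db8:0:103::/64
```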
NEC Launched a Virtual OpenFlow Switch – Does It Matter?
On January 22nd NEC launched another component of their ProgrammableFlow architecture: a virtual switch for Hyper-V 3.0 environments. The obvious questions to ask would be: (a) why do we care, and (b) how is it different from Nicira or BigSwitch?
TL&DR summary: It depends.
Redundant Data Center Internet Connectivity – High-Level Design
Yesterday I described the roadblocks you might encounter when faced with a seemingly simple challenge:
In a network with two data centers (connected with a DCI link), ensure the applications in a data center stay reachable even if its Internet links fail.
In the Solutions Corner (a brand new part of my web site) you’ll find a short high-level design document describing the overall solution and listing the technologies you could use to implement it (you might want to watch the video before reading the document).
Redundant Data Center Internet Connectivity – Problem Overview
During one of my ExpertExpress consulting engagements I encountered an interesting challenge:
We have a network with two data centers (connected with a DCI link). How could we ensure the applications in a data center stay reachable even if all local Internet links fail?
On the face of it, the problem seems trivial; after all, you already have the DCI link in place, so what’s the big deal ... but we quickly figured out the problem is trickier than it seems.
Link Aggregation with Stackable Data Center Top-of-Rack Switches
Tomas Kubica made an interesting comment on my Stackable Data Center Switches blog post: “Suppose all your servers have 4x 10G port and you bundle them to LACP NIC team [...] With this stacking link is not going to be used for your inter-server traffic if all servers have active connections to all nodes of your ToR stack.” While he’s technically correct, the idea of having four 10GE ports on each server just to cater to the whims of stackable switches is somewhat hard to sell.
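The mechanism behind his observation, in a generic sketch (hash-based LACP member selection; the 5-tuple hash is illustrative, not any specific vendor’s algorithm): because every server has a member link to every stack node, whichever link the hash picks, the destination server is directly attached to the same stack member, so the stacking links stay idle.

```python
# With a 4-switch stack and one server uplink per stack member, the LACP
# hash picks an egress link -- and the destination server has a port on
# that same stack member, so frames never cross the stacking links.

def lacp_member(src_ip, dst_ip, src_port, dst_port, num_links=4):
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

member = lacp_member("10.0.0.1", "10.0.0.2", 49152, 80)
print(f"flow enters stack member {member} and exits on its local server port")
```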