Category: data center
Does CCIE still make sense?
A reader of my blog sent me this question:
I am a telecommunications engineer currently preparing for the CCIE exam. Do you think it will still be worth being a CCIE in the near future, given recent developments like Nicira? What will the future of Cisco IOS and protocols like OSPF or BGP be? I am totally disoriented about my career.
Well, although I wholeheartedly agree with a recent post from Derick Winkworth, the sky is not falling (yet):
Edge Virtual Bridging (802.1Qbg) – a Technology Refusing to Die
I thought Edge Virtual Bridging (EVB) would be the technology transforming the kludgy vendor-specific VM-aware networking solutions into a properly designed architecture, but the launch of L2-over-IP solutions for VMware and Xen hypervisors is making EVB obsolete before it ever made it through the IEEE doors.
Updated Webinar Roadmaps
I finally found just the right set of tools to draw and update webinar roadmaps without too much hassle, and updated all of them to include the webinars developed during late 2011 and planned for 2012:
Microsoft Network Load Balancing Behind the Scenes
I realized I'd written a lot about Microsoft Network Load Balancing (NLB) without ever explaining how that marvel of engineering works. To fix that omission, here's a short video taken from the Data Center 3.0 webinar.
FIB Update Challenges in OpenFlow Networks
Last week I described the problems high-end service provider routers (or layer-3 switches, if you prefer that terminology) face when they have to update a large number of entries in their forwarding tables (FIBs). Will these problems go away when we introduce OpenFlow into our networks? Absolutely not: OpenFlow is just another mechanism for downloading forwarding entries (this time from an external controller), not a laws-of-physics-changing miracle.
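A back-of-the-envelope sketch makes the point: the limiting factor is how fast the switch hardware can install entries, not where those entries come from. The numbers below are purely illustrative assumptions, not measured values from any particular platform.

```python
# Back-of-the-envelope FIB update math. The figures are illustrative
# assumptions, not measurements from any specific router or switch.

fib_entries = 500_000            # assumed number of prefixes in the FIB
hw_updates_per_second = 10_000   # assumed hardware FIB/TCAM install rate

# Whether the entries arrive via BGP best-path changes or via OpenFlow
# flow-mod messages from a controller, the hardware install rate stays the same.
full_fib_rewrite = fib_entries / hw_updates_per_second
print(f"Rewriting the full FIB takes roughly {full_fib_rewrite:.0f} seconds")
```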
VXLAN runs over UDP – does it matter?
Scott Lowe asked a very good question in his Technology Short Take #20:
VXLAN uses UDP for its encapsulation. What about dropped packets, lack of sequencing, etc., that is possible with UDP? What impact is that going to have on the “inner protocol” that’s wrapped inside the VXLAN UDP packets? Or is this not an issue in modern networks any longer?
Short answer: No problem.
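The outer UDP header is just an envelope; whatever the VM sends inside the encapsulated frame (usually TCP) still provides its own retransmission and sequencing end-to-end. Here's a minimal sketch of the RFC 7348 header layout, using only the Python standard library and a made-up VNI and payload:

```python
import struct

# VXLAN header (RFC 7348): 8 bytes prepended to the original Ethernet frame.
#   Flags (8 bits, I bit = 0x08 means "VNI is valid"), 24 reserved bits,
#   24-bit VNI, 8 reserved bits. The result is carried in a plain UDP datagram
#   (destination port 4789).
def vxlan_header(vni: int) -> bytes:
    flags = 0x08 << 24                  # I flag set, reserved bits zero
    return struct.pack("!II", flags, vni << 8)

# The outer UDP transport adds no sequencing or retransmission, and it does
# not need to: the payload is the original Ethernet frame, so a TCP session
# running inside the VM still handles loss recovery and reordering itself.
inner_ethernet_frame = b"...original L2 frame with IP/TCP inside..."
vxlan_packet = vxlan_header(vni=5001) + inner_ethernet_frame

print(len(vxlan_packet), "bytes of UDP payload")
```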
IP Renumbering in Disaster Avoidance Data Center Designs
It’s hard for me to admit, but there just might be a corner use case for split subnets and inter-DC bridging: even if you move a cold VM between data centers in a controlled disaster avoidance process (moving live VMs rarely makes sense), you might not be able to change its IP address due to hard-coded IP addresses, be it in application code or configuration files.
Disaster recovery is a different beast: if you’ve lost the primary DC, it doesn’t hurt if you instantiate the same subnet in the backup DC.
Can We Really Ignore Spaghetti and Horseshoes?
Brad Hedlund wrote a thought-provoking article a few weeks ago, claiming that the horseshoes (or trombones) and spaghetti created by virtual workloads and appliances deployed anywhere in the network don’t matter much with new data center designs that behave like distributed switches. In theory, he’s right. In practice, less so.
Which virtual networking technology should I use?
After I published the Decouple virtual networking from the physical world article, @paulgear1 sent me a very valid tweet: “You seemed a little short on suggestions about the path forward. What should customers do right now?” Apart from the obvious “it depends”, these are the typical use cases (as I understand them today – please feel free to correct me).
FCoE and LAG – industry-wide violation of FC-BB-5?
Anyone serious about high-availability connects servers to the network with more than one uplink, more so when using converged network adapters (CNA) with FCoE. Losing all server connectivity after a single link failure simply doesn’t make sense.
If at all possible, you should use dynamic link aggregation with LACP to bundle the parallel server-to-switch links into a single aggregated link (also called bonded interface in Linux). In theory, it should be simple to combine FCoE with LAG – after all, FCoE runs on top of lossless Ethernet MAC service. In practice, there’s a huge difference between theory and practice.
Nexus vPC and Consistency Checker
Michel sent me a detailed e-mail describing both his enthusiasm for vPC and the headaches the consistency checker is causing him. Here's the good part:
Nexus vPC seems like a perfect solution for real multi-chassis EtherChannel. At work we're using it extensively on a few pairs of Nexus 7000s.
... and then it turns sour:
However, there is one MAJOR drawback with vPC at this time: the way the consistency checker works (or rather, doesn't work). We've come across two specific situations where the consistency checker will bring down your beautiful and redundant vPC link, and we've found no way around it.
Here are his problems:
Busting Layer-2 Data Center Interconnect Myths
A few weeks ago I delivered a short L2 DCI WebEx presentation to CCIE Club Poland. I took the L2 part of my Data Center Interconnect webinar and added 15 minutes of L2 DCI mythbusting.
That part of my presentation is on YouTube; for the rest, watch my Data Center Interconnect webinar.
Follow-the-Sun Workload Mobility? Get Lost!
Based on what I wrote about the latency and bandwidth challenges of long-distance vMotion and why it rarely makes sense to use it in disaster avoidance scenarios, I was asked to write an article tackling an idea that is an order of magnitude more ridiculous: using vMotion to migrate virtual machines around the world to bring them close to the users.
That article disappeared a long time ago in the haze of mergers, acquisitions, and SEO optimization, so I'm reposting it here:
What Is OpenFlow (Part 2)?
Got this set of questions from a CCIE pondering emerging technologies that could be of use in his data center:
I don't think OpenFlow is clearly defined yet. Is it a protocol? A model for control-plane to forwarding-plane interaction? An abstraction of the forwarding plane? An automation technology? Is it a virtualization technology? I don't think there is consensus on these things yet.
OpenFlow is very well defined. It's a protocol between the control plane (controller) and the data plane (switch) that allows the control plane to:
VXLAN termination on physical devices
Every time I'm discussing the VXLAN technology with a fellow networking engineer, I inevitably get the question "how will I connect this to the outside world?" Let's assume you want to build a pretty typical 3-tier application architecture (next diagram) using VXLAN-based virtual subnets, and you already have firewalls and load balancers. Can you use them?
The product information in this blog post is outdated: Arista, Brocade, Cisco, Dell, F5, HP and Juniper are all shipping hardware VXLAN gateways (this post has more up-to-date information). The concepts explained in the following text are still valid; however, I would encourage you to read other VXLAN-related posts on this web site or watch the VXLAN webinar to get a more recent picture.