Blog Posts in May 2014

What Are Linux Containers?

Everyone talks about Linux containers these days as if they were the hottest thing invented this spring. In reality, they’re a pretty old technology that some smart web hosting companies have been using heavily for years (but of course, some people think mentioning Google makes everything look sexier).

If you’re interested in a high-level overview of differences between Linux containers and more traditional virtual machines, watch the video from the Introduction to Virtual Networking webinar.

It’s OK to Let Developers Go @ Amazon Web Services, but Not at Home? You Must Be Kidding!

Recently I was discussing the benefits and drawbacks of virtual appliances, software-defined data centers, and a self-service approach to application deployment with a group of extremely smart networking engineers.

After the usual set of objections, someone said, “but if we don’t become more flexible, the developers will simply go to Amazon. In fact, they already use Amazon Web Services.”

How Line-rate Is Line-rate?

During yesterday’s Data Center Fabrics Update presentation, one of the attendees sent me this question while I was talking about the Arista 7300 series switches:

Is the 7300 really non-blocking at all packet sizes? With only 2 x Trident-2 per line card it can't support non-blocking for small packets based on Trident-2 architecture.

It was an obvious example of vendor bickering, so I ignored the question during the presentation, but it still intrigued me, so I decided to do some more research.
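If you want to run the numbers yourself, here’s a quick back-of-the-envelope sketch in Python. The 32 × 40GE port count and the ~720 Mpps forwarding rate are illustrative assumptions (the latter is a commonly quoted ballpark for a Trident-2-class ASIC), not datasheet values from Arista or Broadcom.

```python
# Back-of-the-envelope check: packets per second needed to keep a set of ports
# at line rate for a given packet size, compared with an assumed ASIC forwarding
# rate. All figures below are illustrative assumptions, not datasheet values.

ETH_OVERHEAD_BYTES = 20  # preamble (7) + SFD (1) + inter-frame gap (12)

def required_mpps(num_ports: int, port_gbps: float, packet_bytes: int) -> float:
    """Million packets per second needed to keep num_ports at line rate."""
    bits_per_packet = (packet_bytes + ETH_OVERHEAD_BYTES) * 8
    return num_ports * port_gbps * 1e9 / bits_per_packet / 1e6

ASSUMED_ASIC_MPPS = 720  # assumed forwarding capacity of a Trident-2-class chip

for size in (64, 128, 256, 512, 1518):
    needed = required_mpps(num_ports=32, port_gbps=40, packet_bytes=size)
    verdict = "fits" if needed <= ASSUMED_ASIC_MPPS else "exceeds assumed capacity"
    print(f"{size:>5}-byte packets: {needed:8.1f} Mpps needed ({verdict})")
```

With these assumptions, 64-byte packets need roughly 1.9 Gpps, far beyond the assumed forwarding rate, while packets of a few hundred bytes or more fit comfortably; that’s exactly the gap the question is hinting at.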

Queuing Mechanisms in Modern Switches

A long while ago there was an interesting discussion started by Brad Hedlund (then at Dell Force10) comparing leaf-and-spine (Clos) fabrics built from fixed-configuration pizza-box switches with high-end chassis switches. The comments made by other readers were all over the place (pricing, wiring, power consumption), but surprisingly nobody touched on the queuing issues.

This blog post focuses on queuing mechanisms available within a switch; the next one will address end-to-end queuing issues in leaf-and-spine fabrics.

All Operations Engineers Should Have Firefighting Training

Recently I had a fantastic conversation with Erich Hohermuth, a networking engineer with an unusual hobby: he’s a professional firefighting instructor (teaching firefighters across the country how to do their job).

Volunteer fire departments are pretty popular in Central European countries, and so he’s not the only one on his team with that skillset. The (not so unexpected) side effect: these people are the best ones when it comes to fighting IT disasters.

Load Balancing Across IP Subnets

One of my readers sent me this question:

I have a data center with huge L2 domains. I would like to move routing down to the top of the rack; however, I’m stuck with a load-balancing question: how do load balancers work if you have a routed network and pool members that are multiple hops away? How can that work with Direct Server Return?

There are multiple ways to make load balancers work across multiple subnets:

Optimizing OpenFlow Hardware Tables

Initial OpenFlow hardware implementations used a simplistic approach: install all OpenFlow entries in TCAM (the hardware that’s used to implement ACLs and PBR) and hope for the best.

That approach was good enough to get you a tick-in-the-box on RFP responses, but it fails miserably when you try to get OpenFlow working in a reasonably sized network. On the other hand, many problems people try to solve with OpenFlow, like data center fabrics, involve simple destination-only L2 or L3 switching.
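To make that idea more concrete, here’s a minimal sketch in plain Python (no controller framework; the table and field names are mine, not any vendor’s API): flow entries that match only on destination MAC or destination IP prefix can go into the large L2 or L3 hardware tables, and only the remaining entries need scarce TCAM space.

```python
# Conceptual sketch: decide which hardware table a flow entry could live in,
# based purely on its match fields. Table and field names are illustrative.

def pick_hw_table(match: dict) -> str:
    """Return the hardware table a flow entry could be installed into."""
    fields = set(match)
    if fields == {"eth_dst"}:
        return "L2 MAC table"          # exact-match destination MAC lookup
    if "ipv4_dst" in fields and fields <= {"eth_type", "ipv4_dst"}:
        return "L3 LPM table"          # longest-prefix-match IP lookup
    return "TCAM"                      # anything fancier needs ACL/PBR hardware

entries = [
    {"eth_dst": "00:11:22:33:44:55"},
    {"eth_type": 0x0800, "ipv4_dst": "10.1.0.0/16"},
    {"eth_type": 0x0800, "ipv4_src": "10.2.0.0/16", "tcp_dst": 80},
]
for match in entries:
    print(pick_hw_table(match), "<-", match)
```

A production implementation obviously has to deal with priorities, masks, and table capacities; the sketch only shows the basic classification idea.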

Is OpenFlow Useful?

The Does Centralized Control Plane Make Sense post triggered several comments along the lines of “do you think there’s no need for OpenFlow?”

TL;DR version: OpenFlow is just a low-level tool; don’t blame it for how it’s being promoted… but once you figure out it’s nothing more than a TCAM (ACL+PBR) programming tool, you’ll quickly find a few interesting use cases. If only we had hardware we could use to implement them; most vendors gave up years ago.
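As a toy illustration of the “TCAM programming tool” view, here’s a short Python sketch of an ordered list of ACL/PBR-style rules (highest priority first, first match wins), steering web traffic from one subnet through a hypothetical scrubbing appliance while everything else follows normal forwarding. The prefixes, port numbers, and actions are made up for the example.

```python
import ipaddress

# Ordered ACL/PBR-style rules, highest priority first (first match wins),
# the way a TCAM evaluates entries. All values are hypothetical.
rules = [
    {"src": "10.10.0.0/16", "tcp_dst": 80,   "action": "out port 12 (scrubbing appliance)"},
    {"src": "0.0.0.0/0",    "tcp_dst": None, "action": "normal forwarding"},
]

def lookup(src_ip: str, tcp_dst: int) -> str:
    """Return the action of the first rule matching the packet."""
    for rule in rules:
        src_ok = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        port_ok = rule["tcp_dst"] in (None, tcp_dst)
        if src_ok and port_ok:
            return rule["action"]
    return "drop"

print(lookup("10.10.3.7", 80))    # -> out port 12 (scrubbing appliance)
print(lookup("192.0.2.1", 443))   # -> normal forwarding
```

That’s all policy-based routing really is: a prioritized match on more than just the destination address, with a forwarding decision attached.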

Does Centralized Control Plane Make Sense?

A friend of mine sent me a challenging question:

You've stated a couple of times that you don't favor the OpenFlow version of SDN due to a variety of problems like scaling and latency. What model/mechanism do you like? Hybrid? Something else?

Before answering the question, let’s step back and ask another one: “Does a centralized control plane, as evangelized by ONF, make sense?”

It Doesn’t Make Sense to Virtualize 80% of the Servers

A networking engineer was trying to persuade me of the importance of hardware VXLAN VTEPs. We quickly agreed that physical-to-virtual gateways are the primary use case, and he tried to illustrate his point by saying, “Imagine you have 1000 servers in your data center and you manage to virtualize 80% of them. How will you connect them to the other 200?” to which I replied, “That doesn’t make any sense.” Here’s why.

SDN, OpenFlow, NFV and SDDC: Hype and Reality (2-day Workshop)

There are tons of SDN workshops, academies, and webinars out there, many of them praising the almost-magic properties of the new technologies, or the shininess of vendors’ new gadgets and strategic alliances. Not surprisingly, the dirty details of real-life deployments aren’t their main focus.

As you might expect, my 2-day workshop isn’t one of them.
