Category: vMotion

Live VM Mobility Network Requirements

Every time someone mentions how some awesome new technology solves live VM mobility across WAN networks, I start muttering unmentionables. Live VM mobility across disjoint layer-2 subnets works great in demos, but usually fails in real life due to stale ARP caches. The only way to solve this problem for good is to implement EC2-like layer-3 forwarding in hypervisor soft switches.
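
To see why the ARP caches go stale, here's a minimal Scapy sketch (the addresses and interface name are made up for illustration) of the kind of gratuitous ARP update that's supposed to refresh neighbors' caches after a VM move; because it's a link-local broadcast, it never reaches hosts or routers sitting in a disjoint layer-2 segment.

```python
# Minimal sketch, assuming Scapy is installed; MAC/IP addresses and the
# interface name are made up for illustration.
from scapy.all import ARP, Ether, sendp

VM_IP  = "10.1.1.10"              # address the VM keeps after the move
VM_MAC = "00:50:56:ab:cd:ef"      # VM MAC address (unchanged by vMotion)

# Gratuitous ARP: "10.1.1.10 is at 00:50:56:ab:cd:ef", sent as a broadcast
garp = Ether(src=VM_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,                         # ARP reply ("is-at")
    hwsrc=VM_MAC, psrc=VM_IP,
    hwdst="ff:ff:ff:ff:ff:ff", pdst=VM_IP,
)

# The broadcast never leaves the local layer-2 segment; hosts and routers in a
# disjoint segment keep their stale ARP entries pointing at the old location.
sendp(garp, iface="eth0", verbose=False)
```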

Update: LISP Host Mobility seems to be a potential exception; see the comment from Nico.

For more details, watch the VM Mobility Requirements video (part of the Enterasys-sponsored DCI webinar), read the Hot and Cold VM Mobility blog post, or watch the recording of the NFD4 session with Cisco’s Victor Moreno.

Dirty Details of Inter-DC VM- and IP Address Mobility

The upcoming Data Center Interconnect webinar (register) is sponsored by Enterasys Networks, so it’s obvious that I’ll also mention how you can use their technology to solve particular data center interconnect problems, but that’s not all. The webinar will focus primarily on Whys, Hows and Whats of solving VM- and IP address mobility challenges in multi-data center environments.

Here are a few of the topics we’ll cover:

PVLAN, VXLAN and cloud application architectures

Aldrin Isaac made a great comment on my Could MPLS-over-IP replace VXLAN? article:

As far as I understand, VXLAN, NVGRE and any tunneling protocol that uses a global ID in the data plane cannot support PVLAN functionality.

He’s absolutely right, but you shouldn’t try to shoehorn VXLAN into existing deployment models. To understand why that doesn’t make sense, we have to focus on the typical cloud application architectures.
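
To see what's actually missing, here's a toy Python sketch of the standard private-VLAN forwarding rules (the VM names and role assignments are made up): whether two ports can talk depends on their roles (promiscuous, community, isolated), while a VXLAN or NVGRE header carries only a segment ID that says "same segment or not".

```python
# Toy model of standard private-VLAN forwarding rules; VM names are made up.
PVLAN_ROLE = {
    "lb-1":   ("promiscuous", None),     # e.g. a load balancer or default gateway
    "web-1":  ("community", "web"),
    "web-2":  ("community", "web"),
    "cust-a": ("isolated", None),
    "cust-b": ("isolated", None),
}

def pvlan_can_talk(a: str, b: str) -> bool:
    role_a, comm_a = PVLAN_ROLE[a]
    role_b, comm_b = PVLAN_ROLE[b]
    if "promiscuous" in (role_a, role_b):
        return True                      # promiscuous ports talk to everyone
    if role_a == role_b == "community":
        return comm_a == comm_b          # community ports talk within their community
    return False                         # isolated ports talk only to promiscuous ports

# With a single VXLAN VNI all five VMs sit in one flat segment and can all talk;
# the data-plane header carries nothing that would keep cust-a away from cust-b.
print(pvlan_can_talk("cust-a", "cust-b"))   # False under PVLAN rules
print(pvlan_can_talk("web-1", "web-2"))     # True  (same community)
```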

Virtualized Squashed Complexity Sausage

Straight from RFC 6670 (section 3.4):

[...] as is usually the case with communications technologies, simplification in one element of the system introduces an increase (possibly a non-linear one) in complexity elsewhere. This creates the "squashed sausage" effect, where reduction in complexity at one place leads to significant increase in complexity at a remote location.

This is probably the most concise description of the great idea of using long-distance vMotion for “mission-critical” craplications, and it applies equally well to the kludges used to compensate for the simplicity of virtual switches.

Long-Distance Workload Mobility in Perspective

In a recent blog post, Chuck Hollis described how some of EMC’s customers use long-distance workload mobility. Not surprisingly, he focused on the VPLEX Metro part of the solution and didn’t even mention the earth-flattening requirements this idea imposes on the network. I guess you already know my views on that topic, but regardless of my personal opinions, he got me curious.

Video: Networking requirements for VM mobility

You’re probably sick and tired of me writing and talking about networking requirements for VM mobility (large VLAN segments that some people want to extend across the globe), but just in case you have to show someone a brief summary, here’s a video taken from the Data Center Fabric Architectures webinar.

You’ll also find VM mobility challenges described to various degrees in the Introduction to Virtual Networking, VMware Networking Deep Dive and Data Center Interconnects webinars.

Follow-the-sun workload mobility? Get lost!

A week ago I was writing about the latency and bandwidth challenges of long-distance vMotion and why it rarely makes sense to use it in disaster avoidance scenarios (for a real disaster avoidance story, read this post ... and note that no vMotion was used to get out of harm’s way). The article I wrote for SearchNetworking tackles an idea that is an order of magnitude more ridiculous: using vMotion to migrate virtual machines around the world to bring them close to the users (follow-the-sun workload mobility). I wonder which islands you’d have to use to cross the Pacific in the 10 ms RTT hops supported by vMotion?
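
Here's the back-of-the-envelope math; every number below is a rough assumption, not a measurement or a vendor spec:

```python
# All figures are rough assumptions, not vendor specs.
FIBER_KM_PER_MS = 200          # light travels ~200 km per millisecond in fiber
RTT_BUDGET_MS   = 10           # round-trip latency budget often quoted for vMotion

max_hop_km = (RTT_BUDGET_MS / 2) * FIBER_KM_PER_MS   # one-way distance per hop
pacific_km = 8000                                     # rough trans-Pacific distance

print(f"Maximum distance per hop: ~{max_hop_km:.0f} km")
print(f"Stepping stones needed:   ~{pacific_km / max_hop_km:.0f}")
# ~1,000 km per hop, so at least eight conveniently placed island data centers,
# ignoring real-world fiber paths and equipment latency.
```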

As described in my Data Center Interconnects webinar (and the Scalability Rules book), the only solution that really works is a scale-out application architecture combined with load balancers.
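
If you want a picture of what that means in practice, here's a toy sketch (site names, coordinates and the distance metric are made up and grossly simplified): every data center runs its own copy of the application, and a global load balancer sends each user to the nearest healthy site; nothing ever vMotions anywhere.

```python
import math

# Made-up sites and coordinates; a real GSLB would use DNS, health checks and
# proper geolocation instead of this crude distance metric.
SITES = {"us-west": (37.4, -122.1), "eu-central": (50.1, 8.7), "ap-southeast": (1.3, 103.8)}

def closest_site(lat: float, lon: float, healthy: set) -> str:
    """Send the user to the nearest healthy data center."""
    return min(healthy, key=lambda s: math.hypot(SITES[s][0] - lat, SITES[s][1] - lon))

print(closest_site(-33.9, 151.2, set(SITES)))                 # Sydney user -> ap-southeast
print(closest_site(-33.9, 151.2, {"us-west", "eu-central"}))  # site down -> traffic shifts elsewhere
```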

Read and enjoy the article ...

Long-distance vMotion for disaster avoidance? Do the math first

The proponents of inter-DC layer-2 connectivity (required by long-distance vMotion) inevitably cite disaster avoidance (along with buzzword-bingo-winning business agility) as one of the primary requirements after they figure out stretched clusters might not be such a good idea (and there’s no way to explain the dangers of split subnets to some people). When faced with the disaster avoidance “requirement”, ask them to do some basic math first.
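
Here's the sort of back-of-the-envelope calculation I have in mind; every number below is an assumption you'd replace with your own figures:

```python
# Best-case evacuation estimate -- all numbers are assumptions, plug in your own.
NUM_VMS        = 500      # VMs that have to get out of harm's way
RAM_PER_VM_GB  = 16       # average VM memory footprint
COPY_OVERHEAD  = 1.3      # re-copying of memory pages dirtied during migration
WAN_GBPS       = 10       # usable inter-DC bandwidth, assuming nothing else uses the link

total_gbits = NUM_VMS * RAM_PER_VM_GB * 8 * COPY_OVERHEAD
hours = total_gbits / (WAN_GBPS * 3600)
print(f"Best-case evacuation time: ~{hours:.1f} hours")
# ~2.3 hours in this rosy scenario -- ignoring storage replication, per-host
# vMotion concurrency limits, and the approaching hurricane's timetable.
```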

Large-scale bridging = nuked earth

If you’re not working for a data center fabric vendor (in which case, please read today’s other post), you’ll probably enjoy the excellent analogy Ethan Banks made after reading my TRILL-over-WAN post:

Think of a network topology like a road map. There's boulevards, major junction points, highways, dead ends, etc. Now imagine what that map looks like after it's been nuked from orbit: flat. Sure, we blew up the world, but you can go in a straight line anywhere you want.

... and don’t forget to be nice to the people asking for inter-DC VM mobility ;)

High availability fallacies

I’ve already written about the stupidities of risking the stability of two data centers to enable live migration of “mission critical” VMs between them. Now let’s take the discussion a step further – after hearing how critical the VM that the server or application team wants to migrate is, you might be tempted to ask “and how do you ensure its high availability the rest of the time?” The response will likely be along the lines of “We’re using VMware High Availability” or, even more proudly, “We’re using VMware Fault Tolerance to ensure even a hardware failure can’t bring it down.”

vSphere 5.0 new networking features: disappointing

I was sort of upset that my vacation was making me miss the VMware vSphere 5.0 launch event (on the other hand, being limited to half an hour of Internet access served with an early-morning cappuccino is not necessarily a bad thing), but after I managed to get home, I realized I hadn’t really missed much. Let me rephrase that – VMware launched a major release of vSphere, and the networking features are barely worth mentioning (or maybe they’ll launch them when the vTax brouhaha subsides).

Automatic edge VLAN provisioning with VM Tracer from Arista

One of the implications of Virtual Machine (VM) mobility (as implemented by VMware’s vMotion or Microsoft’s Live Migration) is the need to have the same VLAN configured on the access ports connected to the source and the target hypervisor hosts. EVB (802.1Qbg) provides a perfect solution, but it’s questionable when it will leave the dreamland domain. In the meantime, most environments have to deploy stretched VLANs ... or you might be able to use hypervisor-aware features of your edge switches, for example VM Tracer implemented in Arista EOS.
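
To illustrate the idea, here's a conceptual toy model (not Arista's actual API; host names and VLAN numbers are made up): a hypervisor-aware switch derives the VLANs needed on each server-facing port from the VMs currently running on the attached host, and updates that set when a VM moves.

```python
# Conceptual toy model only -- not Arista's API; names and VLANs are made up.
# host -> list of (vm_name, vlan) currently running on that host
hosts = {
    "esx-01": [("web-1", 100), ("app-1", 200)],
    "esx-02": [("db-1", 300)],
}

def needed_vlans(host: str) -> set:
    """VLANs that must be allowed on the edge port facing this host."""
    return {vlan for _, vlan in hosts[host]}

print("before:", {h: needed_vlans(h) for h in hosts})

# vMotion web-1 from esx-01 to esx-02: the switch has to add VLAN 100 on the
# port facing esx-02 (and could prune it from the esx-01 port).
hosts["esx-02"].append(hosts["esx-01"].pop(0))

print("after: ", {h: needed_vlans(h) for h in hosts})
```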

Edge Virtual Bridging (EVB; 802.1Qbg) eases VLAN configuration pains

Update 2011-05-05 16:50 UTC: Added VN-Link/802.1Qbh

Challenge: If you want to deploy virtual machines belonging to different security zones within the same physical host, you have to isolate them. VLANs are the most common approach. If you want to migrate a running VM from one host to another while preserving its user sessions, you usually have to rely on bridging. The set of VLANs needed on a trunk link between the hypervisor host and the access switch is thus unpredictable (more information in my VMware Networking Deep Dive webinar).

Solution #1 (painful): Configure all possible VLANs on the trunk link. Stretched VLANs spanning the whole data center are an ideal ingredient of a major meltdown.
