Is Data Center Trilogy Package the Right Fit to Understand Long Distance vMotion Challenges?

A reader sent me this question:

My company will have 10GE dark fiber across our DCs with possibly OTV as the DCI. The VM team has also expressed interest in DC-to-DC vMotion (<4ms). Based on your blogs it looks like overall you don't recommend long-distance vMotion across DCI. Will the "Data Center trilogy" package be the right fit to help me better understand why?

Unfortunately, long-distance vMotion seems to be a persistent craze that peaks with a predictable period of approximately 12 months, and while nothing seems to inoculate your peers against it, having solid technical arguments might help.

Considering the webinars in the Data Center Trilogy package, the DCI webinar will be the most helpful, and the other two will give you plenty of background knowledge, which will also come in handy when talking with the virtualization engineers.

Speaking of bundles, you might find the whole Data Center Roadmap bundle more interesting – it also includes the Data Center Fabrics and Clos (Leaf-and-Spine) Fabrics webinars.

Continuing with the questions from the same email:

Does the "Load Balancing and Scale-Out Application Architectures" webinar come with the DC trilogy package?

Yes, it’s part of the Data Center 3.0 webinar.

Lastly, I'm thinking even if latency and bandwidth were a non-issue with 10GE DCI, we'd have other DC-to-DC vMotion issues (e.g. asymmetrical routing, traffic trombone, firewall and load balancer state issues, storage/LUN implications, etc), correct?

Absolutely, and then there’s the danger of split subnets. Since we cannot ignore bandwidth and latency, we also have to deal with data gravity challenges and the sheer impracticality of moving the huge amounts of RAM used by VMs (even in a small-to-medium data center) across a WAN link.
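To put the bandwidth argument in perspective, here’s a back-of-envelope sketch (in Python, using purely hypothetical numbers for the RAM footprint and usable link capacity) estimating how long it would take to push the VM memory of a small environment across a 10GE DCI link:

```python
# Back-of-envelope calculation: how long would it take to evacuate the
# active VM RAM of a modest data center across a 10GE DCI link?
# All numbers below are illustrative assumptions, not measurements.

link_gbps = 10.0          # raw 10GE DCI bandwidth
usable_fraction = 0.7     # assume ~70% is usable after protocol overhead and other traffic
total_ram_tb = 20.0       # e.g. 100 hosts with ~200 GB of active VM RAM each (assumption)

usable_gbps = link_gbps * usable_fraction
total_ram_gbit = total_ram_tb * 1024 * 8   # TB -> Gbit

hours = total_ram_gbit / usable_gbps / 3600
print(f"Moving {total_ram_tb:.0f} TB of VM RAM takes roughly {hours:.1f} hours")
# Roughly 6.5 hours with these numbers -- and that ignores dirty-page
# re-copies, storage vMotion traffic, and the fact that the VMs keep
# changing their memory while you're moving them.
```

Plug in your own numbers; the conclusion rarely changes, because the RAM (and storage) footprint grows much faster than the WAN bandwidth you can buy.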

Oh, and finally, don’t forget that DC-to-DC vMotion might not address any real underlying business need; it often appears as a requirement simply because other people want to make their lives easier, while forcing you to make the network less stable.

3 comments:

  1. I had a similar issue with the VM/IT team 3 years ago. Talking to them (and frightening them a bit) finally helped. Unfortunately, we (the networking team) had to give ground in some areas and accept compromises: we agreed to L2 on the VM network, but the underlying infrastructure is pure L3. The VM/IT team knows and understands the risk.
  2. Same here, similar issue 3 years ago... After reading a lot about it we managed to avoid setting up an L2 DCI: we had so many arguments against L2 DCI that the VM team finally agreed to set up an L3 DCI. What helped us? The Trilogy webinars, and even Oracle and IBM recommendations (Oracle recommended we set up Data Guard to maximize availability and data protection, and IBM recommends that WAS cells do not span DCs)... Now we have 2 x dark fiber used to replicate the storage arrays and 2 x dark fiber used to route traffic between the DCs, and we are quite happy this way. Not sure about the VM team, but the network is definitely more stable, and outages have not had much impact on the critical services deployed in both DCs.
  3. I view long-distance vMotion as a technology for facilitating the migration of an individual non-redundant enterprise data center while minimizing the downtime of each individual workload.

    With the understanding that, until the migration is completed, an outage at EITHER data center or of the connection between them is a potential risk and would cause serious problems. Combine a vMotion with a simultaneous storage vMotion, and the workload moves.

    As soon as everything is moved, shut down the old data center.
