VMware announced several vMotion enhancements in vSphere 6, ranging from “finally” to “interesting”.
vMotion across virtual switches. Finally. The tricks you previously had to use were absolutely bizarre.
vMotion across routed networks. Finally someone learned how to spell routing. What really bothers me about this one is that vMotion across routed networks worked forever (probably relying on proxy ARP); it just wasn't supported. I always wondered what the real reason for the lack of support was – maybe they had to implement VRF-like functionality to ensure vMotion traffic uses a different routing table than iSCSI or NFS traffic.
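That VRF-like separation is roughly what the new dedicated vMotion TCP/IP stack gives you: its own routing table and default gateway, independent of the one used by management or storage traffic. A rough configuration sketch (interface name, port-group name, and gateway address are placeholders, and the exact flag spellings should be checked against the esxcli reference):

```shell
# List the TCP/IP stacks on the host; vSphere 6 ships a separate
# "vmotion" netstack with its own routing table
esxcli network ip netstack list

# Create a VMkernel interface bound to the vmotion stack
# (vmk2 and the port-group name are placeholders for this sketch)
esxcli network ip interface add --interface-name=vmk2 \
  --portgroup-name=vMotion-PG --netstack=vmotion

# Give the vmotion stack its own default gateway, separate from the
# one used by management, iSCSI, or NFS traffic
esxcli network ip route ipv4 add --netstack=vmotion \
  --network=default --gateway=192.0.2.1
```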
vMotion across vCenter servers. This one is a clear illustration of how stupid the long-distance vMotion ideas were. If you wanted to do vMotion across multiple data centers, you had to use a single vCenter, making the data centers a single management-plane failure domain (not to mention the minor challenge of losing control of all but one data center if the DCI link fails).
Long-distance vMotion, which now tolerates 100 msec RTT. As expected, it took approximately 10 femtoseconds before a VMware EVP started promoting vMotion between the East and West Coasts (details somewhere in the middle of this blog post).
Note to VMware: just because you fixed your TCP stack (which is good) doesn't mean long-distance vMotion makes any more sense than it did before… not that I would ever expect some of the people promoting it to understand the nuances of why that's so.
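To see why the TCP stack needed fixing at 100 msec RTT, a quick bandwidth-delay-product calculation helps (a back-of-the-envelope sketch; the 10 Gbps link speed is an assumption, not something VMware specified):

```python
# Bandwidth-delay product: how many bytes must be in flight to keep
# a long fat pipe full during a vMotion memory copy.
RTT = 0.1          # 100 msec round-trip time
LINK_BPS = 10e9    # assumed 10 Gbps vMotion link

bdp_bytes = LINK_BPS / 8 * RTT
print(f"Window needed to fill the pipe: {bdp_bytes / 1e6:.0f} MB")  # 125 MB

# By contrast, a classic 64 KB TCP window over the same RTT caps
# throughput at window * 8 / RTT:
classic_window = 65_535
max_bps = classic_window * 8 / RTT
print(f"Throughput with a 64 KB window: {max_bps / 1e6:.1f} Mbps")  # 5.2 Mbps
```

In other words, without large window scaling the transfer would crawl along at a few megabits per second, which is why tolerating 100 msec RTT required TCP stack work in the first place.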