Pete Welcher wrote an excellent Data Center L2 Interconnect and Failover article with a great analogy: he compares layer-2 data center interconnect to beer (one might be a good thing, but it rarely stops there). He also raised an extremely good point: while it makes sense to promote load balancers and scale-out architectures, many existing applications will never run on more than a single server (sometimes using an embedded database like SQL Server Express).
He’s right ... but then you have to ask your CIO what exactly makes such an application “mission critical,” and why it would be worth implementing L2 DCI and risking the stability of a VLAN (or even a whole data center, depending on how bad your design is) just to increase the uptime of a brittle kludge by moving it between data centers intact. After all, it’s quite likely to implode on its own without ever being touched or moved. It might make more sense to be pragmatic, acknowledge that some applications will never be highly reliable, and live with the consequences.
You can read a bit more about this topic in the “Long distance vMotion = traffic trombone, so why go there?” article I wrote for SearchNetworking; numerous details are also covered in the Data Center Interconnect webinar.