VEPA or vCloud Network Isolation?

If I could design my dream data center with total disregard for today’s limitations (and technologies from an alternate universe), it would have optimal connectivity between any two endpoints (real or virtual), no limits on VM mobility, and on-demand L4-7 services insertion (be it firewalling, load balancing or something else) ... all of that implemented on a truly scalable, trombone-free networking infrastructure (in a dream world I don’t care whether it’s called routing or bridging).

Every single networking and virtualization vendor claims to have the keys to this nirvana ... if only we would buy their products. Most of the claims turn out to be pure marketing, aimed solely at getting as much of our budgets as possible. I already wrote about vCloud Director Network Isolation; it turns out Edge Virtual Bridging (EVB; 802.1Qbg) and VEPA are not much better (although HP promises heaven on earth once they get it implemented). Instead of focusing on what we really need to build scalable data centers, networking and virtualization vendors prefer to fight over the distribution of our budgets.

Read more about vCDNI and VEPA in the article I wrote for SearchNetworking.Com.

9 comments:

  1. Hi Ivan,

    You may have answered the question I'm about to ask somewhere else, but I certainly have missed your answer, so here it comes: could you describe the context of your posts in relation to Data Centre networking, i.e. what kind of a "cloudy" arrangement would supposedly live in the DCs you are talking about?

    Some examples of such "cloudy" arrangements might be: a) an enterprise that rents space or owns a DC and runs its own stuff there (and runs private lines to/between DC(s)); b) an XaaS Service Provider that provides "public" cloudy services (accessed via the Internet); or c) an XaaS Service Provider that provides "private" cloudy services (accessed via private lines).

    The reason I'm asking is that there are differences, sometimes significant, in what might and what might not be a problem (or how severe it is), depending on the scenario. For example, in case (a) the limit of 4094 VLANs is not likely to be a limitation (which is a force at play when you're using Nexus, AFAIK), but for an SP with hundreds or thousands of customers it certainly would be.
  2. More to the above - depending on the scenario, the oft-mentioned "tromboning" problem may not be a problem at all. If storage is replicated (synchronously or via storage vMotion) and a VM after the move is talking to storage at the new location, then, assuming we're talking about typical enterprise applications (CRM, ERP), the volumes of data exchanged between the DB server and application clients are usually quite minuscule, so tromboning might not be such a big issue?
  3. The tools to build scalable data centers are already out there, with true distributed processing frameworks like Hadoop, Cassandra, MapReduce, etc. Google, Facebook and Amazon have done this successfully and have scaled their systems without extended VLANs all over the globe. Granted, they choose to hire the best and the brightest engineers, but shouldn't any large organization attempting such a lofty goal as the almighty "Cloud" do the same?

    Most organizations are going about things backwards (probably because consolidation was marketed before "the cloud"). If you implement a true distributed computing platform, you improve server consolidation. VMware solves a consolidation problem, not a distributed computing problem. If organizations want a true cloud, the answer is to invest in engineering talent, not an off-the-shelf product.
  4. It depends on how well you design your network and what the traffic flows are. Inbound traffic will always trombone (unless you use interesting tricks with Route Health Injection that bring you close to an L3 solution), database traffic will also trombone, and outbound traffic might go through the nearest exit point and out into the WAN, but then you can't use any stateful device in the path (Roland Dobbins would undoubtedly say "I told you firewalls stink").
  5. Couldn't agree more ... however, there's the reality of legacy applications that will never be coded properly and a lot of clueless people calling themselves programmers rolling out code based on non-scalable architectures.
  6. I agree that there will be tromboning. I am just saying that if these tromboned flows are small, then it simply does not matter - let them trombone, as the end-user experience (the only thing that matters, in the end) will not be perceptibly affected.
  7. Have you heard of Open vSwitch? It basically establishes tunnels to the edge, so the physical network just does packet forwarding and all service provisioning occurs in the logical network. It sounded a little nuts to me at first, but the more I read and think about it, the more compelling it sounds (see the encapsulation sketch after the comments).

    http://openvswitch.org/
  8. You can use LISP to optimize North-South traffic. What exactly is meant by tromboning?
  9. Thanks for the link - interesting product!
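
To illustrate the "tunnels to the edge" idea from comment 7: conceptually, the virtual switch at the hypervisor edge wraps the tenant's Ethernet frame into an IP/GRE envelope, so the physical network only forwards IP packets between hypervisor endpoints while everything tenant-specific stays in the logical network. Here's a minimal Scapy sketch of such MAC-over-GRE encapsulation - the MAC and IP addresses are made up, and this is an illustration of the concept, not actual Open vSwitch (or vCDNI) code:

```python
# Conceptual MAC-over-GRE encapsulation, similar in spirit to what an overlay
# virtual switch does at the hypervisor edge. All addresses are hypothetical.
from scapy.all import Ether, IP, GRE, TCP

# The tenant's VM-to-VM frame - it exists only in the logical network
inner_frame = (
    Ether(src="00:50:56:aa:00:01", dst="00:50:56:aa:00:02")
    / IP(src="10.0.0.1", dst="10.0.0.2")
    / TCP(dport=80)
)

# The edge wraps the whole L2 frame into IP/GRE between the two hypervisors.
# GRE protocol type 0x6558 = Transparent Ethernet Bridging (L2 payload).
outer_packet = (
    Ether()
    / IP(src="192.0.2.11", dst="192.0.2.22")  # hypervisor tunnel endpoints
    / GRE(proto=0x6558)
    / inner_frame
)

outer_packet.show()  # the transport network sees only the outer IP header
```

The point of the sketch is simply that the transport network never sees the tenant's MAC addresses or VLANs - they travel inside the tunnel payload, which is why the 4094-VLAN limit of the physical network stops being a scaling constraint.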