Build the Next-Generation Data Center
6-week online course starting in spring 2017

How Realistic Is High-Density Virtualization?

A while ago I guesstimated that most private clouds don’t have more than a few thousand VMs, and that they don’t need more bandwidth than two ToR switches can provide.

Last autumn Iwan Rahabok published a blog post describing the compute and storage aspects of such a deployment, and I had a presentation describing its networking aspects. However, whenever I talked about high-density virtualization I wondered how realistic that scenario is in a typical enterprise environment, and you know how hard it is to get a reliable set of data points with more statistical significance than anecdata.

The situation changed dramatically when Frank Denneman from PernixData started publishing statistics collected with their PernixData Architect product. So far he has published several of those data sets.

If you massage that data right, you get the conclusions I was looking for:

  • Plenty of deployments have enough CPU cores and memory for high-density virtualization;
  • A lot of people run high VM densities on two-socket servers, and predictably the percentage of such hosts increases with the number of cores.

On the other hand, it’s interesting to see that while ~15% of 24-core hosts run more than 100 VMs, almost 20% of those hosts run fewer than 10 VMs.
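If you want to do that kind of data massaging on your own environment, it takes just a few lines. The inventory numbers below are made up for illustration (they are not PernixData’s data set); only the bucket boundaries (>100 VMs, <10 VMs) come from the statistics above:

```python
# Hypothetical host inventory: (physical cores per host, number of VMs).
# These numbers are invented for illustration -- not the published data.
hosts = [
    (24, 120), (24, 8), (24, 45), (24, 150), (24, 6),
    (16, 30), (16, 70), (16, 12),
]

def density_breakdown(hosts, cores):
    """Return the percentage of hosts with the given core count
    that run more than 100 VMs and fewer than 10 VMs."""
    sample = [vms for c, vms in hosts if c == cores]
    total = len(sample)
    high = sum(1 for v in sample if v > 100) / total * 100
    low = sum(1 for v in sample if v < 10) / total * 100
    return high, low

high, low = density_breakdown(hosts, 24)
print(f"24-core hosts: {high:.0f}% run >100 VMs, {low:.0f}% run <10 VMs")
```

Feed it a real inventory export (e.g. from RVTools or PowerCLI) instead of the toy list and you get the same breakdown for your own data center.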

On a slightly tangential topic, I ran the presentation I mentioned above as a short webinar for my subscribers, and the resulting videos are already available in the Designing Private Cloud Infrastructure webinar.

2 comments:

  1. As we got denser, we began to push the limits of UCS oversubscription, mostly due to the Fibre Channel traffic. We had to increase the southbound chassis links from 2 cables to 4, and increase the number of uplinks in the SAN port channels. Other than that, it's hard to outrun the 6200-series fabric interconnects.
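    Doubling the southbound chassis links halves the worst-case oversubscription ratio. A back-of-the-envelope check (the port counts and speeds below are assumptions for a typical 8-blade chassis with 10GE ports, not the commenter's actual setup):

    ```python
    def oversubscription(blades, blade_gbps, uplinks, uplink_gbps):
        """Worst-case chassis oversubscription ratio,
        assuming every blade bursts at line rate at the same time."""
        return (blades * blade_gbps) / (uplinks * uplink_gbps)

    # Assumed: 8 blades with 10GE each -- adjust for your hardware.
    print(oversubscription(8, 10, 2, 10))  # 2 uplinks -> 4.0, i.e. 4:1
    print(oversubscription(8, 10, 4, 10))  # 4 uplinks -> 2.0, i.e. 2:1
    ```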

  2. @silent rider:

    Looks like you're hitting a common limitation of blade architectures. In 2016, I just don’t see the point of blade servers. Unless you’re running a heavy slant of physical-to-virtual hosts, it just seems like more complexity and lower scalability/performance for little gain. Even if you’re using UCS for rackmounts, there’s still little win IMO. It adds a ton of complexity under the guise of simplicity, and it costs an arm and a leg on top of it. Good old-fashioned rackmounts are just easier, faster, and more cost-effective when it comes to virtualization. The only selling point I’ve heard for UCS is “what happens if your server crashes?”, which is a good point for physical servers, but not so much for virtual ones. The one exception might be if you're running 100+ virtual hosts; at that point, the automation capability you get with UCS might start paying off.

    I’ve been running the exact setup Ivan has been mentioning for a while now (4 years) and it works great. I have two 5596s, and all our VM hosts (32 of them) uplink into those switches. We have a number of hosts running well over 100 VMs each and they’re not even breaking a sweat.

