
Can Enterprise Workloads Run on Bare-Metal Servers?

One of my readers left a comment on my “Optimize Your Data Center by Virtualizing the Servers” blog post saying (approximately):

Seems like LinkedIn did it without virtualization :) Can enterprises achieve this to some extent?

Assuming you want to replace physical servers that have one or two CPU cores and 4GB of memory with modern servers having dozens of cores and hundreds of gigabytes of memory, the short answer is: not for a long time.
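As a rough back-of-the-envelope illustration (the server sizes below are invented for this example, not taken from any benchmark), the consolidation ratio is limited by whichever resource runs out first:

```python
# Back-of-the-envelope consolidation math (illustrative numbers only).
legacy_cores, legacy_ram_gb = 2, 4        # assumed legacy physical server
modern_cores, modern_ram_gb = 48, 512     # assumed modern two-socket server

# How many legacy-sized workloads fit, constrained by the scarcer resource?
by_cpu = modern_cores // legacy_cores     # 24 workloads by CPU
by_ram = modern_ram_gb // legacy_ram_gb   # 128 workloads by memory
print(min(by_cpu, by_ram))                # prints 24 -- CPU is the limit here
```

In other words, unless you slice the modern server into smaller units, most of its capacity sits idle under a single legacy workload.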

In-memory databases, well-written 64-bit database software, and some big-data applications are obvious exceptions.

Most of the software running on modern servers has been designed (and sometimes heavily optimized) for architectures that had totally different performance bottlenecks; a 32-bit application, for example, cannot use more than 4GB of address space no matter how much memory the server has.

In other cases, it’s nearly impossible (for technical or political reasons) to run multiple software packages on the same system. Have you ever tried to combine applications that rely on different versions of Java on one machine?

What are the solutions?

  • Slice the physical hardware into multiple virtual instances with a reasonable amount of memory (4GB, anyone?) and just enough CPU capacity to run the workload (a.k.a. server virtualization);
  • Slice the operating system into smaller independent units that don’t share administrative rights or libraries (aka containers);
  • Deploy an optimized scalable platform that runs well-behaved applications with no dependencies outside of the platform (aka Platform-as-a-Service).

In most enterprise environments I’ve seen so far, server virtualization is the only viable answer for existing workloads.

Interested in challenges like this one? We’ll discuss them in the Building Next-Generation Data Center online course.


  1. It seems the pendulum in the last 10 years is swinging back to the giant centralized systems. There are many enterprises that used to, and still do, minimize the risk of a physical server failure by spreading workloads across dozens of small servers (4 cores & 8GB RAM). With low-end brand-name servers starting at 8 cores and 64GB RAM, and customers unwilling to take the "risk" of putting more than 2 virtual machines on a hypervisor, I've contemplated recommending these customers start using mini-PCs like Intel NUCs. I think there is a big section of the market that doesn't need or want the scale.

    1. Need to write a blog post about this. You're not minimizing the risk by using more servers, you're just reducing the blast radius (or the failure domain) while losing the statistical multiplexing benefits of virtualization.
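A quick simulation makes the statistical-multiplexing point concrete (the workload numbers are invented for illustration): sizing every server for its own workload's peak costs far more capacity than sizing one shared pool for the peak of the combined demand.

```python
import random

random.seed(42)
workloads, samples = 20, 1000

# Each workload needs 1 core most of the time but occasionally bursts to 4.
demand = [[random.choice([1, 1, 1, 4]) for _ in range(samples)]
          for _ in range(workloads)]

# Dedicated servers: each one must be sized for its own workload's peak.
sum_of_peaks = sum(max(w) for w in demand)

# Shared (virtualized) pool: sized for the peak of the combined demand,
# because the workloads rarely burst at the same time.
peak_of_sum = max(sum(w[t] for w in demand) for t in range(samples))

print(sum_of_peaks, peak_of_sum)  # the shared pool needs far less capacity
```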

