Can Enterprise Workloads Run on Bare-Metal Servers?
One of my readers left a comment on my “optimize your data center by virtualizing the servers” blog post saying (approximately):
Seems like LinkedIn did it without virtualization :) Can enterprises achieve this to some extent?
Assuming you want to replace physical servers that have one or two CPU cores and 4GB of memory with modern servers that have dozens of cores and hundreds of gigabytes of memory, the short answer is: not for a long time.
In-memory databases, well-written 64-bit database software, and some big-data applications are obvious exceptions.
Most of the software running on modern servers has been designed (and sometimes heavily optimized) for architectures that had totally different performance bottlenecks. Two examples:
- The Linux TCP stack is really bad at packet forwarding;
- The Apache web server cannot support more than a few thousand concurrent connections (see the sketch below).
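To make the second point concrete, here's a toy thread-per-connection echo server, a minimal sketch of the design style Apache's classic prefork/worker model follows: every client, even an idle one, ties up a full thread (or process) and its stack, which is why this approach runs out of steam at a few thousand concurrent connections. The port number and buffer size are arbitrary.

```python
# Toy thread-per-connection echo server -- the design style (one thread or
# process per client, as in Apache's classic prefork/worker MPMs) that stops
# scaling around a few thousand concurrent connections, because every idle
# client still costs a full thread and its stack.
import socket
import threading

def handle(conn: socket.socket) -> None:
    with conn:
        while data := conn.recv(4096):   # echo until the client disconnects
            conn.sendall(data)

def serve(host: str = "0.0.0.0", port: int = 8080) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1024)
        while True:
            conn, _addr = srv.accept()
            # One thread per connection: fine for hundreds of clients,
            # painful for tens of thousands.
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```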
In other cases, it’s nearly impossible (for technical or political reasons) to run multiple software packages on the same system. Have you ever tried to combine applications that rely on different versions of Java on one machine?
What are the solutions?
- Slice the physical hardware into multiple virtual instances, each with a reasonable amount of memory (4GB, anyone?) and just enough CPU capacity to run the workload (aka server virtualization);
- Slice the operating system into smaller independent units that don’t share administrative rights or libraries (aka containers; see the sketch after this list);
- Deploy an optimized scalable platform that runs well-behaved applications with no dependencies outside of the platform (aka Platform-as-a-Service).
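To illustrate the second option, here's a minimal sketch of the resource-slicing idea behind containers, assuming a Linux host with cgroup v2 mounted at /sys/fs/cgroup and root privileges. The cgroup name and the workload command are placeholders, and a real container runtime adds namespaces, image management and much more on top of this; on a systemd host you'd normally let systemd delegate a subtree instead of writing under the cgroup root directly.

```python
# Minimal sketch of the resource-slicing idea behind containers: carve out a
# cgroup (v2) with a 4GB memory cap and two CPUs' worth of CPU time, then
# launch a workload inside it. Requires root on a Linux host with cgroup v2
# mounted at /sys/fs/cgroup; the group name and command are placeholders.
import os
import subprocess
from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/legacy-app")   # hypothetical slice name

def create_slice(memory_bytes: int, cpus: float) -> None:
    CGROUP.mkdir(exist_ok=True)
    # Hard memory limit for everything placed in this group.
    (CGROUP / "memory.max").write_text(str(memory_bytes))
    # cpu.max takes "<quota> <period>" in microseconds; quota = cpus * period.
    period = 100_000
    (CGROUP / "cpu.max").write_text(f"{int(cpus * period)} {period}")

def run_in_slice(cmd: list[str]) -> int:
    def enter_slice() -> None:
        # Runs in the child after fork(): move it into the cgroup
        # before it exec()s the workload.
        (CGROUP / "cgroup.procs").write_text(str(os.getpid()))
    return subprocess.run(cmd, preexec_fn=enter_slice).returncode

if __name__ == "__main__":
    create_slice(memory_bytes=4 * 1024**3, cpus=2.0)    # 4GB, 2 CPUs
    run_in_slice(["/usr/bin/env", "java", "-version"])  # placeholder workload
```

Server virtualization achieves a similar effect one layer down by slicing the hardware in the hypervisor, while PaaS hides the slicing behind the platform.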
In most enterprise environments I’ve seen so far, server virtualization is the only answer for existing workloads.
Interested in challenges like this one? We’ll discuss them in the Building Next-Generation Data Center online course.