It doesn’t make sense to build a new data center network to support legacy bare-metal server infrastructure. You’d have to use relatively expensive 1G/10G ports to connect the current and future servers, and once the server and virtualization engineers wake up and do a hardware refresh you’ll end up with way too many ports (oh, and you do know that transceivers can cost more than the switching hardware, right?).
In the ideal case, you’d build a new infrastructure with high-density servers and a 100% virtualized workload… and then all you’d need would be two 1RU or 2RU ToR switches. Unfortunately, most organizations can’t find their path from here to there due to tons of internal red tape (aka budgets and depreciation periods).
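A quick back-of-the-envelope sketch shows why the fully virtualized footprint fits into a pair of ToR switches. All the numbers below (1,000 VMs, ~75 VMs per server, dual-homed servers) are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope port math behind the "two ToR switches" claim.
# All inputs are made-up assumptions for illustration only.

def tor_ports_needed(total_vms, vms_per_server, nics_per_server=2):
    """Return the number of server-facing switch ports required."""
    servers = -(-total_vms // vms_per_server)  # ceiling division
    return servers * nics_per_server

# Assume 1,000 VMs on high-density servers hosting ~75 VMs each,
# each server dual-homed (one uplink to each of the two ToR switches):
ports = tor_ports_needed(1000, 75)
print(ports)  # 28 ports total, i.e. 14 per switch
```

With assumptions like these, even a modest 48-port 1RU switch pair has ports to spare; compare that with cabling hundreds of bare-metal servers.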
Eric Hanselman (then at 451 Research) provided an interesting way out of this Catch-22 during one of the Interop panels:
- Start a disaster recovery project;
- Rent space at a colocation facility (hat tip to Rick Parker);
- Build your disaster recovery infrastructure over there;
- Move the workload, declare success, and shut down the legacy infrastructure;
- Start another disaster recovery project ;)
Obviously you’d want the new infrastructure to be as forward-looking as your organization feels comfortable with. High-density servers (each of them hosting 50–100 VMs) are a no-brainer. Virtualized network services appliances are already a harder sell, because they might require changes in processes and responsibilities if you want to do them right, and distributed file systems (like Nutanix or VMware VSAN) might turn out to be mission impossible, because, you know, storage.
Design aspects of modern cloud infrastructure are covered in my Designing Private Cloud Infrastructure webinar and Data Center Design Case Studies book (included with the webinar); other Cloud Infrastructure and SDDC webinars give you the technology details you’ll need to understand the design tradeoffs.