All You Need Are Two Top-of-Rack Switches

Every time I’m running a classroom version of my Designing the Cloud Infrastructure workshop, I start with a simple question: “Who has more than 2000 VMs or bare-metal servers in the data center?”

I might see three hands on a good day; 90-95% of the audience have smaller data centers… and some of them get disappointed when I tell them they don’t need more than two ToR switches in their data center.

I wrote an almost identical blog post a year ago, and I’m still getting the same questions, so here it is again.

Let’s do some math. It’s easy to pack 50-100 VMs in a single 1RU server these days (more details in the Designing a Private Cloud Infrastructure case study and Data Center Design book) – 2000 VMs can easily fit onto 40 servers, for a total of 80 x 10GE ports (two 10GE ports per hypervisor host are still enough for most workloads).
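The napkin math above can be sketched in a few lines of Python; the density figures are the assumptions from the paragraph, not measurements:

```python
# Napkin math: how many servers and ToR ports do 2000 VMs need?
vms = 2000
vms_per_server = 50          # conservative end of the 50-100 range
ports_per_server = 2         # two 10GE ports per hypervisor host

servers = -(-vms // vms_per_server)   # ceiling division -> 40
server_ports = servers * ports_per_server

print(f"{servers} servers, {server_ports} x 10GE server-facing ports")
# -> 40 servers, 80 x 10GE server-facing ports
```

Even at the pessimistic end of the density range you stay well below a hundred server-facing ports.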

Most 1RU ToR switches have 64 10GE ports (sometimes available in 40GE format; see my Data Center Fabrics webinar for details), and two of them provide all the ports you need to connect 40 servers, associated storage (or none if you’re already in the distributed local storage world), network appliances or network services cluster, and WAN edge routers or your layer-3 backbone.
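To check the port budget, here’s the same exercise against a pair of 64-port switches; only the server port count comes from the text, the rest of the headroom is whatever you need for storage, appliances, and WAN edge:

```python
# Port budget: do two 64-port ToR switches cover everything?
switch_ports = 2 * 64        # two 1RU switches, 64 x 10GE each
server_ports = 80            # 40 servers x two 10GE ports (from the text)

spare = switch_ports - server_ports
print(f"{spare} x 10GE ports left for storage, appliances and WAN edge")
# -> 48 x 10GE ports left for storage, appliances and WAN edge
```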

For a totally different perspective, let’s focus on bandwidth requirements. In the good old days most traffic left the data center; these days, in some rare cases up to 90% of the traffic stays within the data center.

Let’s assume our data center has two 10GE uplinks, and 90% of the traffic stays within the data center. The total amount of traffic generated in the data center is thus 200 Gbps. Let’s add another 200 Gbps for storage traffic, and multiply the sum by two to cater to the whims of marketing math – 800 Gbps is more than enough bandwidth.

Most 1RU switches provide 1.28 Tbps of non-blocking marketing bandwidth these days, and we need at least two of them for redundancy. Yet again, we don’t need more than two switches.
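The bandwidth calculation works out like this (all figures are the assumptions from the two paragraphs above):

```python
# Bandwidth napkin math: two 10GE uplinks carry the 10% of
# traffic that leaves the data center.
uplink_gbps = 2 * 10
external_fraction = 0.10

total_traffic = uplink_gbps / external_fraction   # -> 200 Gbps generated
storage_traffic = 200                             # assumption from the text
marketing_factor = 2                              # "marketing math" fudge

needed = (total_traffic + storage_traffic) * marketing_factor
print(f"{needed:.0f} Gbps needed vs 1280 Gbps per 1RU switch")
# -> 800 Gbps needed vs 1280 Gbps per 1RU switch
```

A single switch already covers the inflated estimate; the second one is there purely for redundancy.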

Your requirements might be totally different. You may run a Hadoop cluster, in which case this calculation makes no sense, or you might be a large service provider planning to deploy a million VMs, in which case you shouldn’t base your designs on napkin calculations and blog posts, but build in-house knowledge or get a network architect to help you (you do know I’m available for short engagements, don’t you?).

Finally, you might be in that awkward spot where you need 70 or 80 10GE ports per switch. Buying two 2RU switches (with 96 to 128 10GE ports each) is probably easier than building a leaf-and-spine fabric.


  1. "95-95% of the audience have smaller data centers… "

    Assuming you meant 90-95%?

    Thanks for the helpful posts, easy to pass along such sanity checks to the systems folks ;-)
    1. Absolutely. Fixed. Thank you!
  2. Amen.

    It seems a lot of people in the network industry forget how dense an optimized virtual platform can be these days.

    Especially when you start to talk hyperconverged platforms (Nutanix, EVO:RAIL, SimpliVity, etc.) with integrated flash, density increases again.

    Average company rack counts should remain static, or shrink... if they optimize.

    Sadly, I've seen a lot of orgs hang on to legacy and not consolidate anywhere near as much as is possible...

    So as always, it depends a lot on the people involved and how empowered they are to make positive changes.
  3. The same view here. Absolutely agree.
  4. When you add network traffic + host vMotion + storage vMotion + the usual storage traffic, it feels a bit dangerous to put 50 to 100 VMs per host on a 10 Gb link.

    For very high density, I feel it's easier to keep a separate storage network, and not have to deal with QoS, CEE & co. But having four ToR switches per rack also costs a bit.
  5. This is even more true in the Cisco world, where your UCS FIs are acting as your server farm's access layer. And if something insane happens and you really do need more physical ports, FEXes can attach to your ToR switches to add more ports with minimal expense or configuration work.
  6. Does it also mean that all we need is VLAN segmentation and not VXLAN? Are 4096 VLANs enough in such a case?
    1. Obviously it depends on your particular use case, but I have yet to see an enterprise environment with 4K VLANs, and in the IaaS provider environment you'd use overlay encapsulation on the attached servers (so you'd only need a single transport VLAN).
  7. All very well if your workload is compatible with running on the order of 50 VMs per server... and the bulk of your systems are okay with being VMed. Perhaps that means I'd be one of the few who raised their hands!