
# Let’s Focus on Realistic Design Scenarios

An engineer working for a large system integrator sent me this question:

> Since you are running a detailed series on leaf-and-spine fabrics these days, could you please tell me whether the design scenarios of the Facebook and LinkedIn data centers are also covered?

We did cover multistage leaf-and-spine fabrics in the overview part of the webinar, and Brad Hedlund provided great examples of building fabrics with up to 24,000 ports, but when I started working on the design part of the webinar, I decided to focus on simple (3-stage) leaf-and-spine fabrics.

Using modern modular spine switches and a single leaf and spine layer you can easily get to at least 10,000 10GE (or 25GE) port fabrics, and people who need larger data centers don’t need my webinars. Also, just because your CxO told you to build a data center like Google/Facebook/whoever does doesn’t mean that you need it – sometimes you don’t need more than two switches ;)

#### How did I get 10,000 ports?

Let’s do a few simple calculations, assuming that the total number of ports in a leaf-and-spine fabric equals server-ports-per-leaf * number-of-leafs, and that the number of leaf switches equals ports-per-spine-switch (every leaf connects to every spine).

The proof (and figuring out which minor details we ignored) is left as an exercise for the reader; for more details, watch the Leaf-and-Spine Fabrics webinar… and feel free to write a comment pointing them out.
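The back-of-the-envelope formula can be written down as a tiny sketch (illustrative only; it ignores the same minor details mentioned above, such as the leaf ports used for uplinks):

```python
# Back-of-the-envelope sizing of a 3-stage leaf-and-spine fabric.
# Assumption: every leaf connects to every spine, so the number of
# leaf switches is limited by the port count of one spine switch.
def fabric_ports(server_ports_per_leaf: int, spine_ports: int) -> int:
    """Total server-facing ports in the fabric."""
    number_of_leafs = spine_ports
    return server_ports_per_leaf * number_of_leafs
```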

Using 1:2 oversubscription here’s a gigantic fabric you can build with Nexus 9000 switches:

• Leaf switch: Nexus 93108 (48 10GE ports, 6 40GE uplinks)
• Spine switch: Nexus 9516 (576 40GE ports)
• Total number of server-facing ports: over 27,000

Need something smaller? Use a fixed-configuration spine switch:

• Leaf switch: Nexus 93108 (48 10GE ports, 6 40GE uplinks)
• Spine switch: Nexus 9332PQ (32 40GE ports)
• Total number of server-facing ports: over 1,500
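Checking the arithmetic behind both examples (port counts taken from the bullet lists above; the oversubscription check assumes the 48 x 10GE down / 6 x 40GE up leaf configuration):

```python
# Modular spine: Nexus 9516 with 576 x 40GE ports, one leaf per spine port
print(48 * 576)          # server-facing ports with the modular spine

# Fixed-configuration spine: Nexus 9332PQ with 32 x 40GE ports
print(48 * 32)           # server-facing ports with the fixed spine

# Leaf oversubscription: 48 x 10GE down versus 6 x 40GE up
print((48 * 10) / (6 * 40))   # 2.0 -> the 1:2 ratio used above
```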

You can reach similar numbers with data center switches from Arista, Brocade, Dell, HP or Juniper (in alphabetic order).

So maybe the design scenarios I covered in the Leaf-and-Spine Designs part of the webinar are good enough for your data center. Want to discuss them? Why don’t you join the Building the Next Generation Data Center course?

1. I see that the smaller example uses 6 spines. Wouldn't it be possible to use 24 spines with 40G-to-4x10G breakouts (40GBASE-LR4 to 4x10GBASE-LR)?
Accounting for the Trident II limit of 104 interfaces, we should reach 4,992 10G ports out of 104 leafs and 24 spines?
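The commenter's math can be sketched as follows (assumptions: a Nexus 93108 leaf with 6 x 40GE uplinks, and a spine whose Trident II ASIC supports at most 104 logical interfaces, as stated in the question):

```python
# Each leaf splits its 6 x 40GE uplinks into 4 x 10GE each,
# giving one 10GE uplink per spine
leaf_uplinks_10g = 6 * 4
print(leaf_uplinks_10g)      # number of spines one leaf can reach

# Trident II interface limit caps the leafs per spine at 104
max_leafs = 104
print(max_leafs * 48)        # total 10GE server-facing ports
```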

1. Of course you could do that (and get 4x the number of ports in the ideal case), but the wiring tends to be a mess.