Facebook Backpack Behind the Scenes

When Facebook announced 6-pack (their first chassis switch) my reaction was “meh” (as well as “I would love to hear what Brad Hedlund has to say about it”). When Facebook announced Backpack I mostly ignored the announcement. After all, when one of the cloud-scale unicorns starts talking about their infrastructure, what they tell you is usually low on detail and serves primarily as a talent-attracting tool.

When a pundit started calling Facebook “more genuinely innovating than networking vendors,” it was time to check what Wedge and Backpack are all about.

Hardware

Like most chassis switches today, 6-pack and Backpack have linecard modules and fabric modules connected in a leaf-and-spine topology. Like most other data center switches today, they use merchant silicon.

To make their life simpler, Facebook repackaged their fixed switches into linecards and fabric modules of chassis switches. In their own words: “The Backpack chassis is equivalent to a set of 12 Wedge 100 switches connected together.”

Yet again, good engineering but nothing revolutionary.

The only surprise: 6-pack and Backpack don’t have supervisor modules.

Control and management plane

How do you build a chassis switch that has no supervisor module? Where do you run the control- and management-plane software?

There are two obvious answers: run an election process and run the control- and management plane on the linecard elected master (see also: stackable switches), or run an independent control- and management plane on every linecard and fabric module.

Facebook went for the second option: each linecard and fabric module runs an independent control- and management plane (see their blog post for more details). They use IBGP within the switch to exchange prefixes, with the fabric modules acting as BGP route reflectors. Does this sound like ACI-in-a-box or QFabric-in-a-box? It does to me.
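To make this more concrete, here’s a minimal sketch (in Python, emitting FRR-style BGP configuration) of what such an intra-chassis IBGP design might look like, assuming the usual 8-linecard/4-fabric-module split behind the “12 Wedge 100 switches” quote. The AS number, addressing and exact syntax are my assumptions, not Facebook’s actual implementation:

```python
# Hypothetical sketch: generate FRR-style IBGP configuration for the modules
# inside a Backpack-like chassis. Fabric modules act as BGP route reflectors,
# linecards peer only with them. All names, addresses and the AS number are
# made up for illustration.

FABRIC_AS = 65001                                      # assumed intra-chassis AS
FABRIC_MODULES = [f"10.0.0.{i}" for i in range(1, 5)]  # 4 fabric modules (spines)
LINECARDS = [f"10.0.1.{i}" for i in range(1, 9)]       # 8 linecards (leaves)

def fabric_module_config(router_id: str) -> str:
    """Fabric module: IBGP route reflector with every linecard as a client."""
    lines = [f"router bgp {FABRIC_AS}", f" bgp router-id {router_id}"]
    for client in LINECARDS:
        lines += [
            f" neighbor {client} remote-as {FABRIC_AS}",
            f" neighbor {client} route-reflector-client",
        ]
    return "\n".join(lines)

def linecard_config(router_id: str) -> str:
    """Linecard: plain IBGP sessions to the four route reflectors only."""
    lines = [f"router bgp {FABRIC_AS}", f" bgp router-id {router_id}"]
    for rr in FABRIC_MODULES:
        lines.append(f" neighbor {rr} remote-as {FABRIC_AS}")
    return "\n".join(lines)

print(fabric_module_config(FABRIC_MODULES[0]))
print()
print(linecard_config(LINECARDS[0]))
```

The route-reflector design keeps the session count linear: each linecard maintains four IBGP sessions instead of the eleven a full mesh across twelve modules would require.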

However, based on my understanding, Facebook didn’t implement a single management plane (like the QFabric Director or the APIC controller). It turns out they don’t have to – they know how to manage nodes at scale, so it doesn’t matter whether they have to manage N switches or 12 × N.

Cumulus Networks used the same approach when porting Cumulus Linux to Backpack. For more details, watch the webinar with JR Rivers.

Does that work for anyone else? It depends – you did automate your data center fabric provisioning, right? Well, if you didn’t, you might want to attend the Building Network Automation Solutions course ;)
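As a toy illustration of why the device count stops mattering once provisioning is automated, here’s a minimal Python sketch; the inventory format and the configuration template are invented for this example:

```python
# Toy example: template-driven provisioning. The rendering loop is identical
# whether the inventory contains N switches or 12 x N internal modules; the
# template and inventory format are made up for this illustration.

TEMPLATE = """hostname {name}
interface lo
 ip address {loopback}/32
router bgp {asn}
 bgp router-id {loopback}
"""

def render_configs(inventory: list[dict]) -> dict[str, str]:
    """Render one configuration per device, regardless of how many there are."""
    return {device["name"]: TEMPLATE.format(**device) for device in inventory}

# 12 modules or 1200 - same loop, same template, same effort:
inventory = [
    {"name": f"module-{i}", "loopback": f"10.0.0.{i}", "asn": 65001}
    for i in range(1, 13)
]
for name, config in render_configs(inventory).items():
    print(f"### {name}\n{config}")
```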

Why bother?

So why did Facebook decide to build a chassis switch? It’s the wiring mess.

Of course you can build the same switching fabric with pizza-box switches and loads of cables, but a leaf-and-spine fabric with a fixed wiring plan inside a chassis is cleaner, and the internal connections are cheaper than the optics you’d have to buy to build your own leaf-and-spine fabric (regardless of how cheap you can get the transceivers).
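To see why the internal links win on cost, consider a back-of-the-envelope calculation with deliberately made-up prices (a sketch, not real market data):

```python
# Back-of-the-envelope cost comparison with made-up numbers: every external
# leaf-to-spine link needs two transceivers plus a fiber run, while the same
# link inside a chassis is a backplane trace (treated as free here).

LEAF_SPINE_LINKS = 32     # hypothetical fabric: 8 leaves x 4 spines
TRANSCEIVER_USD = 300     # assumed price of one 100GE optic
FIBER_RUN_USD = 50        # assumed price of one fiber cable

external_cost = LEAF_SPINE_LINKS * (2 * TRANSCEIVER_USD + FIBER_RUN_USD)
print(f"External wiring for the same topology: ${external_cost:,}")  # $20,800
```

Plug in your own transceiver and cabling prices; the point is that the external build pays for two optics per link while the chassis doesn’t.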

Innovative?

What Facebook did is definitely good engineering. The only reason they could do it is that they’re able to optimize the total cost of operations. Could any traditional networking vendor tell a customer “we can make you a more reliable and cheaper switch, but you’ll have to manage it as 12 switches”? They could try, but they’d probably be told to get out because “we want to manage a single box that must have two supervisors to make it redundant.”

After all, the only reason we have humongous routers like the CRS-3 is that service providers wanted to buy a few big boxes instead of building their own infrastructure from smaller building blocks.

Does that make Facebook Backpack innovative? Depends on how you define innovation.

Back to real life…

You probably won’t use Facebook Backpack in your data center any time soon, but you might have to build a leaf-and-spine fabric and decide whether to use fixed or chassis switches. Dozens of networking engineers and architects found the Building Next-Generation Data Center online course to be the perfect place to discuss these challenges (and many others). Become one of them – register for the next session now.

2 comments:

  1. This is why I love networking “visionaries”. One day they tell you how we will all program every single flow from a magic controller; the next day they tell you to run BGP with EVPN on every pizza box.

    By the way, what is “open” about it? Did they make the Broadcom SDK public?
    1. The design has been contributed to OCP; there’s a variety of ASIC vendors, as well as ODMs you can get the switches from.
      The BCM SDK, as the name suggests, is the property of Broadcom and can’t be made public by a third party (that’s why we’ve all signed NDAs).