This article was initially sent to my SDN mailing list. To register for SDN tips, updates, and special offers, click here.
Usman asked a few questions in his comment on my blog, including:
At the moment, local RIB gets downloaded to FIB and we get packet forwarding on a router. If we start evaluating too many fields (PBR) and (assume) are able to push these policies to the FIB - what would become of the FIB table size?
Short answer: It would explode ;)
I've published at least a few blog posts on this very topic in the past, but the same questions keep coming back for whatever reason. If you're interested in the details of OpenFlow, I'd strongly recommend watching the vendor-agnostic OpenFlow Deep Dive webinar, and if you cannot afford the $99.99, you might find the blog posts collected in a nicely curated digital book format useful.
Very few switches have a FIB that can match on more than the destination MAC address or destination IP prefix. Matching on additional fields, particularly matching with wildcards, is usually done in a separate (more expensive) hardware structure, typically called TCAM.
Sliding down the rabbit hole
The above statement, while mostly correct, is an oversimplification. You don't need TCAM if you only need to match on a fixed set of bits in the packet. Such a match is easily done with a hash-based lookup (think of MAC address matching, with the bits collected from arbitrary parts of the packet instead of the beginning of the frame). You obviously need a hardware forwarding pipeline flexible enough to collect bits from arbitrary places – and Arista found one: they managed to use the MAC address table to match the /24 IPv4 prefixes and /48 IPv6 prefixes that David Barroso needed for his SDN Internet Router.
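The fixed-bits trick can be sketched in a few lines. This is purely illustrative (the function and table names are mine, not Arista's): an exact-match hash table, conceptually the MAC table, keyed on the first 24 bits of the destination IPv4 address.

```python
# Hypothetical sketch: using an exact-match (hash) table to forward on
# fixed-length prefixes, the way a MAC table could hold /24 IPv4 routes.
# All names are illustrative, not any vendor's API.

def prefix24_key(ip):
    """Extract the first 24 bits of a dotted-quad IPv4 address."""
    a, b, c, _ = (int(x) for x in ip.split("."))
    return bytes([a, b, c])

# Exact-match "MAC table" repurposed for /24 prefixes -> next hop
mac_table = {
    prefix24_key("192.0.2.0"): "nh-1",
    prefix24_key("198.51.100.0"): "nh-2",
}

def lookup(dst_ip):
    # A single hash lookup on a fixed set of bits -- no TCAM needed,
    # but also no variable-length (longest-prefix) matching.
    return mac_table.get(prefix24_key(dst_ip))

print(lookup("192.0.2.42"))   # nh-1
print(lookup("203.0.113.5"))  # None -- no matching /24 entry
```

Note what you give up: every entry must be exactly /24, because a hash lookup has no notion of "longest match" – which is precisely why this trick works only for a fixed prefix length.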
Hardware vendors are also getting more creative over time: Broadcom’s Tomahawk can (supposedly) use the Unified Forwarding Table (UFT) for N-tuple matching, and Juniper used some crazy math called Bloom Filters in their Q5 chip to get TCAM-like behavior with traditional RAM.
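To give you a feel for the "crazy math", here's a minimal Bloom filter – a toy sketch of the general technique, not Juniper's Q5 implementation: a probabilistic membership test built from plain RAM and a handful of hash functions, where "no" is always correct and "yes" means "almost certainly".

```python
# Minimal Bloom-filter sketch (illustrative only): set membership using
# ordinary RAM and a few hash functions instead of TCAM.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False means "definitely not present"; True means "probably present"
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.0/8")
print(bf.might_contain("10.0.0.0/8"))     # True
print(bf.might_contain("172.16.0.0/12"))  # almost certainly False
```

The catch is the occasional false positive, which is why Bloom filters are usually paired with a slower exact lookup that confirms the "probably present" answers.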
Back to OpenFlow
Whenever you're talking about OpenFlow, keep in mind that deploying new software (an OpenFlow agent) on a switch doesn't change the forwarding hardware. You might get more functionality if the developer of the new software actually read the hardware specs (instead of just the vendor SDK) and decided to squeeze the last drops of forwarding magic out of the chipset, or less functionality if the $vendor decided not to support full OpenFlow functionality. You would be amazed at how many useful features are missing from commercial OpenFlow implementations.
Now let’s answer a few common questions:
Can we store any OpenFlow flow in the FIB? No, unless the flow matches on a destination MAC address or a destination IP prefix.
Where do the other flows go? In TCAM, or whatever the memory used to store PBR/ACL entries is called.
Isn’t that memory pretty limited? Yep. Most terabit switches have a few thousand entries (or fewer).
How will we get flow-based forwarding with those limitations? My point exactly…
Won’t the situation improve in the future? Sure. You can always throw more hardware at the problem, and the hardware always gets cheaper. But guess what – simpler hardware gets even cheaper ;) There is no free lunch, regardless of what the flow-based zealots are telling you.
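The placement logic behind the answers above fits in a dozen lines. This is a hypothetical sketch (the table names and the TCAM size are illustrative, not any particular switch): exact destination-MAC or destination-prefix matches go into the large L2/L3 tables, everything else has to squeeze into the small TCAM.

```python
# Hypothetical sketch of the table-placement decision described above.
# Table names and sizes are illustrative, not taken from any real switch.

TCAM_SIZE = 2048  # "a few thousand entries" on a typical terabit switch

def place_flow(match_fields):
    """Decide which hardware table an OpenFlow flow entry lands in."""
    if match_fields == {"dst_mac"}:
        return "l2_table"
    if match_fields == {"dst_ip_prefix"}:
        return "l3_table"
    return "tcam"  # wildcards / multi-field matches need TCAM

tcam_used = 0
flows = [
    {"dst_mac"},                                # plain L2 forwarding
    {"dst_ip_prefix"},                          # plain L3 forwarding
    {"src_ip", "dst_ip_prefix", "tcp_dport"},   # PBR-style match
]
for f in flows:
    table = place_flow(f)
    if table == "tcam":
        tcam_used += 1
    print(sorted(f), "->", table)

print(f"TCAM: {tcam_used}/{TCAM_SIZE} entries used")
```

With per-flow (5-tuple) forwarding, every single flow takes the third branch – and a few thousand TCAM entries disappear very quickly on a switch pushing millions of flows.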
Can it get any worse?
Sure. Some OpenFlow implementations really suck – they store every single OpenFlow flow in TCAM, even if it matches only on the destination MAC address.
Smarter implementations try to hide the hardware complexity and pretend the TCAM, L2 table and L3 table form a single table. Sounds nice, until you realize it's impossible to figure out whether the next flow you want to install will fit into the table.
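Here's why the single-table illusion bites, as a toy sketch with made-up (deliberately tiny) capacities: the controller sees one flow table, but each entry silently lands in a different hardware table, so the unified view cannot tell you in advance whether the next flow will fit.

```python
# Toy sketch of the "single table" illusion. Capacities are made up
# (and the TCAM deliberately tiny) to show the failure mode quickly.

class UnifiedFlowTable:
    def __init__(self):
        self.capacity = {"l2": 100_000, "l3": 32_000, "tcam": 2}
        self.used = {"l2": 0, "l3": 0, "tcam": 0}

    def _backing_table(self, match):
        if match == {"dst_mac"}:
            return "l2"
        if match == {"dst_ip_prefix"}:
            return "l3"
        return "tcam"  # anything fancier needs wildcard matching

    def insert(self, match):
        table = self._backing_table(match)
        if self.used[table] >= self.capacity[table]:
            return False  # "table full" while the unified view looks almost empty
        self.used[table] += 1
        return True

t = UnifiedFlowTable()
results = [t.insert({"src_ip", "tcp_dport"}) for _ in range(3)]
print(results)                 # [True, True, False] -- third PBR-style flow no longer fits
print(t.insert({"dst_mac"}))   # True -- the L2 table still has plenty of room
```

From the controller's perspective the "table" rejected a flow while holding three entries out of a six-figure total – exactly the unpredictability described above.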
Finally, there are at least two implementations that expose multiple forwarding tables as separate OpenFlow tables.