We did a number of Software Gone Wild podcasts trying to figure out whether smart NICs address a real need or whether they're just another vendor attempt to explore every potential market. As expected, we got opposing views, ranging from Luke Gorrie claiming a NIC should be as simple as possible to Silvano Gai explaining how dedicated hardware performs the same operations at lower cost, with lower power consumption, and at much higher speeds.
In theory, there’s no doubt that Silvano is right. Just look at how expensive some router line cards are, and try to figure out how much it would cost to get the 25.6 Tbps of forwarding performance of a single ASIC (Tomahawk-4) in software (assuming ~10 Gbps per CPU core). High-speed core packet forwarding has to be done in dedicated hardware.
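To make that back-of-the-envelope comparison concrete, here's a quick calculation. The ~10 Gbps-per-core figure is the assumption stated above, not a measured result:

```python
# Back-of-the-envelope: how many CPU cores would it take to match
# the forwarding capacity of a single Tomahawk-4 ASIC, assuming
# ~10 Gbps of software-based forwarding per x86 core?

asic_tbps = 25.6          # Tomahawk-4 forwarding capacity (Tbps)
per_core_gbps = 10        # assumed per-core software forwarding rate

cores_needed = round(asic_tbps * 1000 / per_core_gbps)
print(cores_needed)       # → 2560 cores
```

Thousands of cores just for packet forwarding makes the point: at core-network speeds, the economics clearly favor dedicated silicon.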
Back to the network edge. In practice, one has to balance the increased software complexity caused by smart NICs against the cost of the CPU cores needed for software-based packet forwarding. While software developers yearn for simplicity, NIC vendors would love you to believe you cannot reach the performance you need with software-based packet processing. Even worse, there are still people justifying smart NICs with ancient performance myths. Here’s a sample LinkedIn comment I got in June 2020:
I think you are forgetting one of the major reasons for the rise of smart NICs: the ability to process high-speed networking packet streams at line rate and to perform operations on those streams. Your old x86 processor with an average 4-lane PCIe dumb NIC is not up to the task for 25 Gbps networks or higher.
How about a few facts:
- x86 server architecture hasn’t been a limiting factor for ages. Luke Gorrie demonstrated how to push 200 Gbps from an off-the-shelf x86 server in 2013, managed to do the same with two CPU cores and 64-byte packets in 2016, and explained his ideas in detail in the very first episode of Software Gone Wild.
- In the meantime we did several Software Gone Wild episodes with people pushing the performance envelope of software-based forwarding, including IPv4-over-IPv6 tunnel headend delivering 20 Gbps per x86 core… in March 2016.
I stopped tracking how far they got in the meantime; it was pretty obvious at that point that the “we need hardware switching in NICs” argument was bogus. If you want slightly more recent performance figures, check out the fd.io VPP performance tests and Andree Toonk’s blog posts on high-performance packet forwarding with VPP and XDP.
TL&DR: Just because you can’t figure out how to do it doesn’t mean it can’t be done. Do some more research…
So where do we really need smart NICs? There are (large) niche use cases like support for bare-metal servers in public clouds or preprocessing stock quotes in High-Frequency Trading (HFT) environments. NetApp is also using the Pensando NIC as a generic offload engine, but in that case it just happens that the offload hardware also has an Ethernet port (for a minimal amount of additional detail, please check the Pensando presentations from CFD7). Anything else from real-life production, as opposed to a conference-talk-generating proof-of-concept? Please let me know!