You haven’t mentioned Intel's Omni-Path at all. Should I be surprised?
While Omni-Path looks like a cool technology (at least at the whitepaper level), nobody ever mentioned it (or Intel) in any data center switching discussion I was involved in.
Intel’s solution never came up in my consulting engagements, and it’s not even mentioned in the 2018 Gartner Magic Quadrant (which means it doesn’t exist in their customer base).
Also, I keep wondering why nobody is using Intel silicon. Arista did something years ago with the FM6000, but that was the only time I’ve ever seen an Intel ASIC used in a data center switch.
The only time I heard a similar idea was years ago when Intel was talking about switching silicon in NICs (HT: Jon Hudson during an Interop party). At that time, the architecture they promoted was a hypercube built from servers with switching NICs.
While that idea might make sense for very particular workloads (think Finite Element Method), it’s basically NUMA writ large… and it looks like Intel abandoned that idea in favor of a more traditional approach.
It seems Omni-Path is heavily used in High-Performance Computing (HPC) environments as an InfiniBand replacement. No surprise there: Intel always had very-low-latency chipsets (that was the reason Arista used the FM6000), and combined with all the other features they claim to have implemented in their Fabric Manager (think proprietary SDN controller that actually works), that would make perfect sense.
However, it looks like even High-Frequency Trading (HFT) doesn’t need that kind of speed. Arista was traditionally very strong in HFT, but after launching one product each, Cisco and Arista effectively stopped competing on very-low-latency switches… or maybe mainstream merchant silicon simply became fast enough.
Are you seeing something different? Is anyone using Omni-Path outside of the HPC world?