Obviously the page you're referring to is a quick-and-dirty benchmark. If you wanted the optimal numbers, you would have to tune quite a few parameters just like for hardware benchmarks (sysctl kernel parameters, Jumbo frames, ...).
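For context, the kind of tuning the comment alludes to usually looks something like this on a Linux host. This is a hedged sketch, not the benchmark's actual settings: the interface name and buffer sizes are illustrative placeholders.

```shell
# Enlarge TCP buffers so a single flow can fill a 10GbE pipe
# (values are illustrative; size them to your bandwidth-delay product)
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

# Enable jumbo frames on the uplink (eth0 is a placeholder name;
# every device and switch in the path must support the larger MTU)
ip link set dev eth0 mtu 9000
```

None of this is exotic, but each knob interacts with the rest of the path, which is exactly why tuned numbers tell you little about a default install.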
While he’s absolutely right, this is not the performance data a typical user should be looking for.
We all know that the performance numbers quoted by vendors often represent finely crafted marketing gimmicks – the equipment under test did deliver the numbers appearing in the data sheets, but only under carefully engineered load and plenty of tweaking. How relevant is that to your environment? Can you get anywhere close to that performance?
Huge organizations with billions invested in hardware obviously use large teams to squeeze the most out of the boxes they use (after all, achieving a 1% improvement on a $1B investment results in a cool $10M, which can pay for that team for quite a while), but what about the rest of us?
I’m not really interested in the marketing performance numbers. I want to know what I can expect to get with out-of-the-box installs (and potentially a few well-documented improvements like using STT with VMware NSX to leverage TCP offload capabilities on older NICs).
In the case of the virtual switch performance tests that triggered Simonp’s comment: I got 10 Gbps out of a Linux VM running a default install of CentOS on top of vSphere 4.x when using 10GE uplinks on a Cisco UCS server, and 17 Gbps between two VMs (yet again, using default Linux installs) running on the same vSphere host.
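The post doesn’t say which tool produced those numbers; a typical way to reproduce such a VM-to-VM throughput test with out-of-the-box settings is iperf3 (the tool choice and the VM address below are assumptions for illustration, not details from the original measurement):

```shell
# On the first VM: start the iperf3 server
iperf3 -s

# On the second VM: run a 30-second test with four parallel TCP streams
# (10.0.0.1 is a placeholder for the first VM's address)
iperf3 -c 10.0.0.1 -t 30 -P 4
```

Whatever such a test reports without any prior tuning is precisely the "out-of-the-box" performance a typical user can expect.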
If someone managed to get only 1.4 Gbps out of a Linux box running on top of OpenStack, there’s obviously still room for improvement – I would love to see a virtual switch delivering line-rate speed at reasonable CPU utilization with out-of-the-box settings (hint: this is the point where vendor engineers could tell me how their particular Neutron implementation works infinitely better than the default OpenStack distribution ;).