Open vSwitch Performance Revisited

A while ago I wrote about the performance bottlenecks of Open vSwitch. In the meantime, the OVS team has drastically improved OVS performance, resulting in something Andy Hill called Ludicrous Speed at the latest OpenStack Summit (slide deck, video).

Let’s look at how impressive the performance improvements are.

The numbers quoted in the presentation were 72K flows (with the new default being 200K flows) and 260K pps.

200K flows is definitely more than enough to implement MAC/IP forwarding for 50 VMs (after all, that’s 4000 flows per VM), and probably still just fine even if you start doing reflexive ACLs with OVS (that’s how NSX MH implements pretty-stateful packet filters).
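As a quick sanity check, here's the back-of-the-envelope flow budget per VM (a sketch; the 50:1 packing ratio is my assumption from the next paragraph, the flow counts are the ones quoted in the presentation):

```python
# Rough per-VM flow budget -- a sketch, not a measurement.
# 72K flows were reported in the presentation; 200K is the new default limit;
# 50 VMs per host is my assumed packing ratio.
VMS_PER_HOST = 50

for label, flows in (("reported", 72_000), ("new default", 200_000)):
    print(f"{label}: {flows // VMS_PER_HOST} flows per VM")
# reported: 1440 flows per VM
# new default: 4000 flows per VM
```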

What I’m assuming these days is a 50:1 VM packing ratio (and you can expect 200:1 or more for Docker containers) on a reasonably recent server with 500 GB of RAM, a dozen cores, and two 10GE uplinks. YMMV.

On the other hand, 260K pps is just over a gigabit per second assuming an average packet size of 500 bytes (IMIX average is 340 bytes) or around 3 Gbps with 1500-byte packets.
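Here's that arithmetic worked out (a sketch; the 500-byte and 1500-byte sizes are the assumptions from the paragraph above, 340 bytes is the quoted IMIX average):

```python
# Throughput implied by 260 Kpps at different average packet sizes.
PPS = 260_000

for size_bytes in (340, 500, 1500):   # IMIX average, my 500B assumption, full-size frames
    gbps = PPS * size_bytes * 8 / 1e9
    print(f"{size_bytes:4d}-byte packets -> {gbps:.2f} Gbps")
# 340-byte packets -> 0.71 Gbps
# 500-byte packets -> 1.04 Gbps
# 1500-byte packets -> 3.12 Gbps
```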

To put this number in perspective: the Palo Alto virtual firewall can do ~1 Gbps (while doing slightly more than packet forwarding, so it burns four vCPUs), and the venerable vShield Edge 1.0 managed to push 3 Gbps of firewalled traffic through a userland VM while burning a single core.

The blog post on Network Heresy indicates OVS can do much more than the presentation mentions; after all, the presentation's numbers come from a production deployment and thus reflect an actual compute infrastructure and workload. Still, considering that the typical server I described would have at least two 10GE uplinks (40 Gbps of marketing bandwidth, counting both directions), 1-3 Gbps of throughput looks awfully low. Maybe the production workloads described in the presentation simply don't need more than that, in which case we might not have a problem at all.

Another data point

I found another data point while researching the performance changes in recent OVS releases: an OpenStack Wiki article lists iperf speed between two Linux hosts running on different hypervisors using OVS at ~1.4 Gbps. I was able to get 10 Gbps out of iperf running on Linux hosts on top of vSphere 4.x (on UCS blades) years ago. Honestly, I'm a bit confused.

Have I missed anything? Please share your opinions in the comments.

8 comments:

  1. Hi Ivan,
    I've done some testing on Neutron + OVS + GRE tunnels in the past and I seem to recall that I got higher numbers than 1.4 Gbps. That being said, I don't think I managed to get to 10Gbps either.
    Obviously the page you're referring to is a quick-and-dirty benchmark. If you wanted optimal numbers, you would have to tune quite a few parameters, just like for hardware benchmarks (sysctl kernel parameters, jumbo frames, ...). Using VLAN mode, I had no issue getting to 10 Gbps. I presume the same is true if you use VXLAN with NICs that can do hardware offload. Check this report on the Mellanox site => http://community.mellanox.com/thread/1692
    Anyway, thanks for the blog, I'm always happy to read your content :)
  2. http://openvswitch.org/support/ovscon2014/18/1600-ovs_perf.pptx slide 8 says 6.7 Gbps with VXLAN. As simonp says, with no encapsulation you get line rate.
  3. The slides mentioned earlier (1600-ovs_perf.pptx) show dpdk-ovs doing almost wire speed (9.9 Gbps / 14.85 Mpps) with 64B packets.

    Is it possible they (Ludicrous Speed guys) meant: 72K flows, each at 260Kpps => 72k flows * 260kpps = 18.72 Mpps = 12.579 Gbps, which sounds reasonable for the 2 x 10GbE NICs server you've mentioned, without using dpdk's kernel bypass / zero copy / poll mode drivers / etc.

    How about we just ask them what they meant ;-)

    Replies
    1. You lost three zeroes in that calculation (or I can't type anymore, which wouldn't exactly surprise me ;).
    2. oops. re-calc: 18.72 *Gpps*, divided by 1,488,096 (the pps of 1 Gbps Eth) => 12.579 Gbps, which sounds reasonable ...

      The result remains the same, and that's what counts ;-) (funny, my professors never bought this excuse either)

      And now, I shall prove I'm not a robot...
    3. You're still missing your zeroes (or I'm still asleep). That would be 12 Tbps, which is still a bit too much for an x86 server.
    4. you're absolutely right. sorry for wasting your time. you may want to delete my useless comments. thanks for being so polite.
  4. Any followups on the OVS performance?
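For completeness, here's the calculation from the thread above redone without the lost zeroes (a quick sketch; the ~1.488 Mpps figure is the standard 64-byte line rate of 1 Gbps Ethernet that the commenter used):

```python
# 72K flows, each at 260 Kpps (the interpretation proposed in comment #3).
flows = 72_000
pps_per_flow = 260_000

total_pps = flows * pps_per_flow                # 18.72 Gpps, not Mpps
line_rate_1g_64B = 1_488_095                    # 64-byte pps of 1 Gbps Ethernet
equivalent_gbps = total_pps / line_rate_1g_64B  # ~12,580 Gbps

print(f"{total_pps / 1e9:.2f} Gpps ~ {equivalent_gbps / 1000:.1f} Tbps")
# 18.72 Gpps ~ 12.6 Tbps -- way beyond what a single x86 server can do
```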