Does CPU-based forwarding performance matter for SDN?

David Le Goff sent me several great SDN-related questions. Here’s the first one:

What is your take on the performance issue with software-based equipment that relies on general-purpose CPUs only? Do you see this challenge as a hard stop for the SDN business?

The short answer (as always) is: it depends. However, I think most people approach this issue the wrong way.

First, let’s agree that SDN means programmable networks (or more precisely, network elements that can be configured through a reasonable and documented API), not the Open Networking Foundation’s self-serving definition.

Second, I hope we agree it makes no sense to perpetuate the existing spaghetti mess we have in most data centers. It’s time to decouple content and services from the transport, decouple virtual networks from the physical transport, and start building networks that provide equidistant endpoints (in which case it doesn’t matter to which port a load balancer or firewall is connected).

Now, assuming you’ve cleaned up your design, you have switches that do fast packet forwarding with little need for additional services, and services-focused elements (firewalls, caches, load balancers) that work at L4-7. These two sets of network elements have totally different requirements:

  • Implementing fast (and dumb) L2 (bridging) or L3 (routing) packet forwarding on generic x86 hardware makes no sense. It makes perfect sense to implement the control plane on generic x86 hardware and a generic OS platform (almost all switch vendors use this approach), but it definitely doesn’t make sense to involve the x86 CPU in packet forwarding: a Broadcom chipset can do a far better job for less money.
  • L4-7 services are usually complex enough to require lots of CPU power anyway. Firewalls configured to perform deep packet inspection and load balancers inspecting HTTP sessions must process the first few packets of every session in the CPU anyway, and only then potentially offload the flow record to dedicated hardware. With optimized networking stacks, it’s possible to get reasonable forwarding performance on well-designed x86 platforms, so there’s little reason to use dedicated hardware in L4-7 appliances today (SSL offload is still a grey area).
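
The slow-path/fast-path split described above can be sketched in a few lines of Python. All names and the three-packet threshold below are illustrative assumptions, not any vendor’s implementation:

```python
# Illustrative slow-path/fast-path split in an L4-7 appliance:
# the CPU inspects the first few packets of a session, then "offloads"
# the flow so later packets skip deep inspection. The 3-packet
# threshold and field names are made up for this sketch.

INSPECT_PACKETS = 3

flow_table = {}  # 5-tuple -> packets seen so far on the slow path

def forward(pkt):
    """Return which path handled the packet: 'slow' (CPU) or 'fast' (offload)."""
    key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
    seen = flow_table.get(key, 0)
    if seen >= INSPECT_PACKETS:
        return "fast"            # flow record installed, no deep inspection
    flow_table[key] = seen + 1   # still in the slow path: CPU inspects packet
    return "slow"

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6, "sport": 1234, "dport": 80}
paths = [forward(pkt) for _ in range(5)]
print(paths)  # first three packets take the slow path, the rest are offloaded
```

Real implementations would also age flows out of the table and punt exceptions (fragments, TCP resets) back to the slow path, but the control flow is the same.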

On top of everything else, shortsighted design of the dedicated hardware used by L4-7 appliances severely limits your options. Just ask a major vendor that needed years to roll out IPv6-enabled load balancers and high-performance IPv6-enabled firewall blades ... and still doesn’t have hardware-based deep packet inspection of IPv6 traffic.

Summary: While it’s nice to have high-performance packet forwarding on a generic x86 architecture, the performance of software switching is definitely not an SDN showstopper. Also keep in mind that a software appliance running on a single vCPU can provide up to a few gigabits of forwarding performance, that there are plenty of cores in today’s Xeon-based servers (10 Gbps per physical server is thus very realistic), and that not that many people have multiple 10GE uplinks from their data centers.
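
A back-of-the-envelope check of that claim (the per-vCPU figure is the “few gigabits” assumption from the text, and the core count is an illustrative guess, not a benchmark):

```python
# Back-of-the-envelope aggregate forwarding capacity of one server.
# Both inputs are assumptions taken from the text, not measurements.

gbps_per_vcpu = 3       # "a few gigabits" of forwarding per vCPU
cores_per_server = 8    # a modest Xeon-based server

aggregate_gbps = gbps_per_vcpu * cores_per_server
print(aggregate_gbps)   # 24 -- comfortably above a single 10GE uplink
```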

4 comments:

  1. What do you think about Juniper ASICs versus Cisco ASICs? Like Internet Processor vs. Superman and Tycho?

  2. One of the biggest problems with x86-based appliances is called "interface affinity", which means that each interface is mapped to one CPU that receives its interrupts.

    So if you have a 16-core CPU and only one 10G interface, all the interrupts for that interface are handled by a single core.

    So be careful when doing a network design for an x86-only appliance.
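
One way to spot this on a Linux box is to check whether all of a NIC’s interrupts land on one core in /proc/interrupts. A sketch that parses a made-up excerpt of that file (the real file has one count column per CPU, in the same layout):

```python
# Sum per-CPU interrupt counts for one NIC from a /proc/interrupts-style
# table. SAMPLE is a fabricated excerpt for illustration.

SAMPLE = """\
           CPU0       CPU1       CPU2       CPU3
  42:   9934211          0          0          0   PCI-MSI  eth0-rx-0
  43:   8120774          0          0          0   PCI-MSI  eth0-tx-0
"""

def irq_counts_per_cpu(text, device):
    """Return total interrupt count per CPU for lines mentioning device."""
    header, *rows = text.splitlines()
    ncpus = len(header.split())          # header lists one column per CPU
    totals = [0] * ncpus
    for row in rows:
        fields = row.split()             # ["42:", count0, count1, ..., names]
        if any(device in f for f in fields):
            for cpu in range(ncpus):
                totals[cpu] += int(fields[1 + cpu])
    return totals

print(irq_counts_per_cpu(SAMPLE, "eth0"))  # everything piles up on CPU0
```

On a real system you’d spread the load with RSS (multiple receive queues, each with its own interrupt) or by writing CPU masks to /proc/irq/&lt;n&gt;/smp_affinity.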

  3. Dan, you should take a look at Intel DPDK. Thanks to its Poll Mode Driver (PMD), you no longer need interrupts, and you can scale up to 160 Mpps (packets per second) with the latest Intel CPUs.
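
To put 160 Mpps in perspective, a quick conversion: a minimum-size 64-byte Ethernet frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted.

```python
# Convert a packets-per-second figure into wire bandwidth for
# minimum-size Ethernet frames.
# 64B frame + 8B preamble/SFD + 12B inter-frame gap = 84B on the wire.

pps = 160_000_000
wire_bytes = 64 + 8 + 12

gbps = pps * wire_bytes * 8 / 1e9
print(round(gbps, 1))  # 107.5 -- more than ten 10GE links' worth of small packets
```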

    Replies
    1. Thanks for the pointer.

      I hope vendors will start using this sooner rather than later...



Ivan Pepelnjak, CCIE#1354, is the chief technology advisor for NIL Data Communications. He has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced technologies since 1990. See his full profile, contact him or follow @ioshints on Twitter.