How is device throughput defined

Ali sent me a question that should bother every networking engineer:

Could you explain how Cisco [or another vendor] comes up with the throughput parameters in a product's datasheet? For example, if a vendor says "if IPSec is turned on, the throughput is 20 Mpps", what exactly does that mean? What packet size are they referring to, and what are the implications, because we very seldom have fixed packet sizes in a traffic flow?

The answer, as always, is "it depends". A serious performance analysis report documents the test procedures, including the packet sizes. A "marketing" figure with no further explanation has almost certainly been cooked as much as possible: a Gigabit Ethernet link is sometimes advertised as having 2 Gbps of performance (counting traffic in both directions), and IPSec packet-per-second values are most probably measured with the optimal (in this case low) packet size.
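To see why the packet size matters so much, here's a quick back-of-the-envelope sketch of the line-rate packet rates a Gigabit Ethernet link can carry at different frame sizes. It accounts for the 20 bytes of per-frame wire overhead on Ethernet (7 B preamble, 1 B start-of-frame delimiter, 12 B inter-frame gap):

```python
# Line-rate packet rate on Gigabit Ethernet for a few frame sizes.
# Every frame occupies 20 extra bytes on the wire (preamble + SFD +
# inter-frame gap) on top of the Ethernet frame itself.

LINK_BPS = 1_000_000_000     # Gigabit Ethernet
WIRE_OVERHEAD = 20           # preamble + SFD + inter-frame gap, in bytes

def line_rate_pps(frame_bytes: int) -> float:
    """Maximum packets per second at a given frame size."""
    return LINK_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)

for size in (64, 512, 1518):
    print(f"{size:>5} B frames: {line_rate_pps(size):>12,.0f} pps")
```

At 64-byte frames the link carries roughly 1.49 Mpps; at 1518-byte frames only about 81 kpps. That's why pps figures look best with tiny packets, and bps figures with large ones.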

This article is part of the You've asked for it series.


  1. Sorry - 'optimal' packet sizes for use in marketing materials are large packets, around 1500 bytes or thereabouts.

    Cisco and other vendors still make use of 'iMix', even though it's old and has never borne any relevance whatsoever to the traffic profiles of any real network anywhere, at any time.

    As you indicate, what's really needed is to develop a performance envelope for a given device/interface with packet sizes/frame rates from the applicable minimum to the applicable maximum, a la RFC2544, as well as with various features enabled/disabled.
  2. For testing throughput in pps, the optimal packet size should be "as small as possible", but for testing throughput in bps, it should be "as large as possible without fragmentation" - am I right?
  3. All three of us are basically in agreement. If you're measuring throughput in bps, it's ideal to use large packets (increasing bps @ constant pps).

    If you're measuring throughput in pps, packet sizes usually don't matter much as long as you can generate enough packets based on your bps throughput and port density. Most of the receive/send processing (which is packet-size-dependent) is done in hardware and the CPU (or ASICs) just swaps pointers to packet headers.

    For IPSec performance in pps, it's probably ideal to have small packets ... I'm assuming that the packet size affects the encryption/decryption time, which should be the major part of the per-packet processing time.
  4. Ivan, thanks for linking to the IMIX post we put up. You might also find our running "truth in testing" series of posts helpful; we try to hit on this topic a lot, covering what realistic testing actually means and how to question data sheets in that light.

  6. There is also the flip side to this... whenever a vendor talks to you about replacing your existing products, they tend to put the most broken mirror they can find in front of what you have and claim the existing product is inadequate. We recently experienced this with our Cisco account team... they threw up 64-byte Ethernet throughput numbers for one of our devices and claimed that it didn't offer as much 'throughput' as the Cisco product.

    However, using 64-byte Ethernet frames is hardly a good indication of throughput. It is a good way to test packet-per-second rates, but throughput is best tested with 1500-byte frames, as Roland mentioned above.
  7. Well, neither the pps value nor the bps value describing IPsec throughput stays constant when the packet size changes. That's caused by the sub-processes within the whole IPsec operation: encryption, for example, could be expressed as a constant bps value, and database lookups (SPD, SAD) as a constant pps value. But since these sub-processes run consecutively on the same processor, they all "shape" the final throughput.

    Of course, it is true that the highest throughput value in bps is achieved with the largest packets (~1500 B) and the highest throughput value in pps with the smallest packets. It is also true that most vendors publish the performance of their devices in bps for the largest packet size, and fewer of them for iMix traffic (understandably, as the iMix value is lower).
  8. For example, I measured on a Cisco 1841 running ESP-AES with the crypto accelerator enabled:
    0.823 Mbps for a 20 B payload packet
    42.61 Mbps for a 1390 B payload packet
    and around 30 Mbps for iMix traffic.
  9. Interesting (absolutely valid) observation. Thank you!
  10. The Breaking Point blog post IMIX traffic usefulness has moved since the IXIA acquisition:
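The "consecutive sub-processes" argument in comment 7 (a fixed per-packet cost for lookups plus a per-byte cost for encryption) can be sketched with a toy model. The two cost constants below are made up for illustration and don't correspond to any measured device:

```python
# Toy model of IPsec forwarding cost: each packet pays a fixed per-packet
# cost (SPD/SAD lookups, header handling) plus a per-byte cost (encryption).
# Both constants are hypothetical, chosen only to show the shape of the curve.

PER_PACKET_COST = 50e-6      # seconds per packet (hypothetical lookup overhead)
PER_BYTE_COST = 0.15e-6      # seconds per byte (hypothetical crypto cost)

def throughput(packet_bytes: int) -> tuple[float, float]:
    """Return (pps, bps) for a given packet size under the simple model."""
    time_per_packet = PER_PACKET_COST + packet_bytes * PER_BYTE_COST
    pps = 1.0 / time_per_packet
    bps = packet_bytes * 8 * pps
    return pps, bps

for size in (64, 512, 1500):
    pps, bps = throughput(size)
    print(f"{size:>5} B: {pps:>8.0f} pps, {bps / 1e6:>6.2f} Mbps")
```

Under any such model, the pps value peaks at the smallest packet size and the bps value at the largest, matching the pattern in the measurements quoted above.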