SSL Termination on Virtual Appliances: Another Myth Busted

In the “Can Virtual Routers Compete with Physical Hardware” blog post I mentioned that SSL termination remains one of the few bastions of hardware acceleration.

Based on the comment made by RPM, it looks like I was wrong.

Here’s his reasoning:

  • OpenSSL can do close to 900 RSA signatures per second per CPU core (which, spread across a few cores, coincides nicely with the ~3000 TPS quoted for F5 LTM-VE);
  • A few thousand TPS might be more than enough for most web properties, particularly if you use persistent HTTP connections and TLS session resumption (so you run a full TLS negotiation only for truly new visitors).
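RPM’s figures translate into handshake capacity with a few lines of Python. This is a quick sketch: the per-core signature rate is his number, the core count is my assumption.

```python
# Back-of-envelope TLS handshake capacity, using RPM's figure of
# ~900 RSA signatures per second per CPU core (an assumed ballpark).
RSA_SIGS_PER_SEC_PER_CORE = 900

def handshake_capacity(cores: int) -> int:
    """Full TLS handshakes per second that `cores` CPU cores can sign."""
    return cores * RSA_SIGS_PER_SEC_PER_CORE

# Four cores already exceed the ~3000 TPS quoted for F5 LTM-VE.
print(handshake_capacity(4))  # → 3600
```

With session resumption on top of that, only the truly new visitors ever hit this signing budget.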

Interested in virtual appliances and virtual network functions? Register for the NFV webinar.

Let’s sprinkle a few Fermi estimates on top of that (I know your traffic mix is totally different, but we’re looking at the big picture here).

It takes around 2 MB of data to render an average web page. If you don’t want to get mixed-content warnings, you’d want all of that data encrypted, and if you’re not using a CDN, all of it has to be served from your data center.

Assuming every visitor to your web property looks at a single page (totally unrealistic) and you’re getting 1000 new visitors per second requiring 1000 TLS session negotiations, your data center has to serve 2 GB of data per second (or 16+ Gbps of bandwidth)… and all you need to run those 1000 TLS session negotiations is one or two x86 cores.
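The same Fermi estimate, written out so you can plug in your own traffic mix (all inputs are the rough assumptions from the paragraph above):

```python
# Fermi estimate: bandwidth vs. TLS-negotiation CPU for 1000 new visitors/sec.
# All figures are rough assumptions from the text, not measurements.
PAGE_SIZE_MB = 2                    # average web page
NEW_VISITORS_PER_SEC = 1000         # each triggers one full TLS negotiation
RSA_SIGS_PER_SEC_PER_CORE = 900     # RPM's OpenSSL figure

data_mb_per_sec = PAGE_SIZE_MB * NEW_VISITORS_PER_SEC             # 2000 MB/s
bandwidth_gbps = data_mb_per_sec * 8 / 1000                       # 16 Gbps
cores_for_tls = NEW_VISITORS_PER_SEC / RSA_SIGS_PER_SEC_PER_CORE  # ~1.1 cores

print(f"{bandwidth_gbps:.0f} Gbps of bandwidth, "
      f"{cores_for_tls:.1f} cores for TLS negotiation")
```

The asymmetry is the whole point: serving the pages needs 16+ Gbps of capacity, while signing the handshakes barely keeps two cores busy.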

But wait, there’s more. Things are getting significantly better with Elliptic Curve Cryptography, which is fast enough to allow CloudFlare to offer free SSL termination to anyone.
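To put a hedged number on the ECC improvement: ECDSA P-256 signing commonly benchmarks roughly an order of magnitude faster than RSA-2048 on the same core (the figure below is an assumed ballpark; measure on your own hardware with `openssl speed rsa2048 ecdsap256`):

```python
# Rough per-core signing throughput comparison. Both numbers are assumed
# ballparks (RSA from RPM's comment, ECDSA assumed ~10x faster, which is
# typical for P-256 vs. RSA-2048 in openssl speed runs).
RSA_2048_SIGS_PER_SEC = 900
ECDSA_P256_SIGS_PER_SEC = 9000

speedup = ECDSA_P256_SIGS_PER_SEC / RSA_2048_SIGS_PER_SEC
print(f"ECDSA P-256 handles ~{speedup:.0f}x more handshakes per core")
```

At that rate, the one-or-two cores from the estimate above shrink to a rounding error.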

I think we can safely declare this myth busted ;)


  1. Netflix did some analysis of this, and while there continue to be improvements in SSL, a 53% capacity hit means it can require twice as many resources as unencrypted service. That's probably not an issue for small sites, but a measurable performance (and cost) issue for larger sites.
    1. Don't forget that Netflix runs some pretty unusual load - all they do is transfer data from memory (or disk) to network.

      Most web sites spend orders of magnitude more time on processing data, so the SSL overhead becomes negligible.
    2. I am in violent agreement. TCP sendfile() optimizations only help when you serve tons of large-file static content; most sites do not.

      Engineers should always test and measure rather than blindly follow the lore of the Interwebs.

      We used sample testing in production to roll out TLS for our streaming video, increasing the percentage every day. We never noticed any detectable increase in server CPU or end-user response time metrics. We're clearly not Netflix, but video is about 70% of traffic by volume.

      Most data centers have CPU capacity to burn anyway.
    3. The Netflix argument is a boogeyman edge case. The hit from TLS is nowhere near 50% for anyone but Netflix.

      Google observed a 1% CPU hit and a 2% bandwidth hit rolling out TLS for everything. They needed no new server hardware, much less custom ASICs.


      For the SaaS company I work for, we also could not measure the impact of enabling TLS for everything on our existing VM-based infrastructure.
  2. All these arguments come down to price/performance metrics. With a 50% SSL overhead on HTTPS vs. HTTP, CloudFlare's SSL termination for the masses will also depend on the price/performance of the service.
  3. I ran across another article supporting this notion: hardware HSMs are not much faster than OpenSSL for most practical use cases, and drastically increase cost and complexity.
    So unless you're willing (or forced) to go down the "private keys are only ever in the HSM" route, and pipe all of your traffic through the system that holds the HSM, just use OpenSSL (or a modern version of IIS, as the Windows CryptoAPI seems to be almost as fast as OpenSSL).