Worth Reading: Discovering Issues with HTTP/2

A while ago I found an interesting analysis of HTTP/2 behavior under adverse network conditions. Not surprisingly:

When there is packet loss on the network, congestion controls at the TCP layer will throttle the HTTP/2 streams that are multiplexed within fewer TCP connections. Additionally, because of TCP retry logic, packet loss affecting a single TCP connection will simultaneously impact several HTTP/2 streams while retries occur. In other words, head-of-line blocking has effectively moved from layer 7 of the network stack down to layer 4.

What exactly did anyone expect? We discovered the same problems when tunneling TCP/IP over SSH a long while ago, but too many people insist on ignoring history and learning only from their own experience.
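To make the quoted failure mode concrete, here's a toy Monte Carlo sketch (Python, with every constant invented for illustration) that counts how much aggregate stream time gets blocked behind retransmissions when eight streams share one TCP connection versus getting a connection each. It ignores congestion-window dynamics entirely and models a loss as a fixed stall of the whole connection, which is all the head-of-line-blocking argument needs:

```python
import random

random.seed(7)

STREAMS    = 8        # concurrent HTTP/2 streams (invented)
PKTS       = 10_000   # packets carried per stream (invented)
LOSS_RATE  = 0.01     # 1% packet loss on the path (invented)
RETX_DELAY = 4        # ticks a retransmission stalls its connection (invented)

def stalled_stream_ticks(streams_per_conn: int) -> int:
    """Sum over all streams of ticks spent blocked behind a retransmission."""
    total_stall = 0
    for _conn in range(STREAMS // streams_per_conn):
        for _pkt in range(PKTS * streams_per_conn):
            if random.random() < LOSS_RATE:
                # In-order TCP delivery blocks *every* stream multiplexed
                # on this connection until the segment is retransmitted.
                total_stall += RETX_DELAY * streams_per_conn
    return total_stall

# HTTP/2 style: all eight streams multiplexed onto one connection.
print("1 connection x 8 streams:", stalled_stream_ticks(STREAMS))
# HTTP/1.1 style: one stream per connection; a loss stalls only its own stream.
print("8 connections x 1 stream:", stalled_stream_ticks(1))
```

With the same total traffic and the same loss rate, the shared connection racks up roughly eight times the stalled stream time, simply because every loss blocks all eight streams at once.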

3 comments:

  1. It's like a snake biting its own tail. To solve the problem you have to establish more TCP sessions than your competitors, so overall you would still be faster than the others. BTW, I'm glad to have made it into the news.
  2. Bad analogy. HTTP/2 is not TCP-over-TCP: the inner layer does not implement retransmissions.

    Good war story, though. They'll have to wait for QUIC, as mnot pointed out in the second comment :).
    Replies
    1. And the first comment is perhaps more relevant than I first thought. Google says they use BBR congestion control for all WAN RPC, and maybe that's one potential explanation for how Envoy ended up in this situation. BBR can't solve head-of-line blocking, but it wouldn't have the second problem ("TCP window size will drop dramatically, and all streams will be simultaneously throttled down"), because BBR does not use packet loss as a sign of congestion. (It does have a heuristic that uses packet loss to detect policers, but the threshold for that is set to 20% :), so I would not expect it to trigger here. You can see a bulk throughput vs. loss graph here: https://queue.acm.org/detail.cfm?id=3022184; a rough model of that relationship is sketched below.)
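To put rough numbers on that comment, here's a back-of-the-envelope sketch (Python) comparing the classic Mathis et al. model for loss-based TCP throughput against an idealised BBR that simply holds the bottleneck rate until loss crosses the ~20% policer threshold mentioned above. The link parameters are invented, and the BBR curve is a deliberate caricature rather than the real algorithm:

```python
import math

MSS        = 1460        # bytes per segment (invented link parameters)
RTT        = 0.05        # 50 ms round-trip time
BOTTLENECK = 100e6 / 8   # 100 Mbit/s bottleneck, in bytes per second

def reno_throughput(p: float) -> float:
    """Mathis model for loss-based TCP: rate ~ MSS * 1.22 / (RTT * sqrt(p))."""
    return min(BOTTLENECK, MSS * 1.22 / (RTT * math.sqrt(p)))

def bbr_throughput(p: float) -> float:
    """Caricature of BBR: loss is ignored below the ~20% policer threshold."""
    return BOTTLENECK if p < 0.20 else 0.0  # real BBR degrades, not a cliff

for loss in (0.0001, 0.001, 0.01, 0.05):
    print(f"loss {loss:7.2%}: loss-based {reno_throughput(loss) * 8 / 1e6:6.1f}"
          f" Mbit/s, BBR-ish {bbr_throughput(loss) * 8 / 1e6:6.1f} Mbit/s")
```

Even 1% loss knocks the loss-based model down to a few Mbit/s on this path, while the BBR caricature stays at line rate, which is consistent with the throughput-vs-loss graph in the ACM Queue article linked above.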