Interesting: Measuring End-to-End Latency in a Web Browser
Well, Cloudflare couldn’t do it either, but fortunately modern browsers have an extensive Performance API, and part of that API includes request/response timing. Here’s how you can use that API to measure end-to-end latency:
- Start a bogus request to get past the TCP/TLS negotiation;
- Once the bogus request completes, the HTTPS session with the server is not torn down (modern browsers use persistent HTTP connections), and it already has decent TCP window sizes (see also: Increasing TCP’s Initial Window - RFC 6928);
- Execute the next HTTP request and use the browser Performance API to get the difference between the requestStart and responseStart timestamps. Ignoring the serialization delay of the HTTP request and response, you’ll get a pretty good guesstimate of the end-to-end latency.
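The steps above can be sketched in browser JavaScript. This is a minimal illustration, not a production tool: the URL is a placeholder, and note that for cross-origin requests the server must send the Timing-Allow-Origin header, otherwise the browser zeroes out requestStart/responseStart in the timing entry:

```javascript
// Extract the request-to-first-byte gap from a PerformanceResourceTiming-like
// entry. Both timestamps are in milliseconds.
function latencyFromEntry(entry) {
  return entry.responseStart - entry.requestStart;
}

// Hypothetical helper tying the steps together (browser-only: relies on
// fetch() and the Resource Timing API).
async function measureLatency(url) {
  // 1. Bogus request: pays the TCP/TLS setup cost and leaves a warm,
  //    persistent connection behind.
  await fetch(url, { cache: 'no-store' });

  // 2. Next request reuses that connection.
  await fetch(url, { cache: 'no-store' });

  // 3. Read the timing entry for the second request.
  const entries = performance.getEntriesByName(url);
  const last = entries[entries.length - 1];
  return latencyFromEntry(last);
}

// With a mocked timing entry:
latencyFromEntry({ requestStart: 100.0, responseStart: 119.9 });
// → approximately 19.9 (milliseconds)
```

The mocked-entry example at the bottom shows the only arithmetic involved; everything else is connection warm-up so that the measured gap reflects network latency rather than handshake overhead.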
Interestingly, their results are pretty accurate: the latency to speed.cloudflare.com reported by my web browser was 19.9 msec, while the ping-measured latency was 20.4 msec.
I find it interesting as well that measurements done this way might actually be more accurate, or at least more useful, if you want to measure latency over the Internet when you're not in control of the network: you avoid the deprioritization (or, on the contrary, the artificial priority) that some networks apply to ICMP, whether to save some cycles on routers or to advertise better latency than the network can actually provide. Any DPI that might be in place somewhere will have to inspect this data as well.
You're measuring with the same protocol that carries the data you actually care about, and that's good monitoring practice IMO.