
Per-Packet Load Balancing on WAN links

One of my readers got an interesting idea: he’s trying to make the most of his WAN links by doing per-packet load balancing between a 30 Mbps and a 50 Mbps link. Not exactly surprisingly, the results are not what he expected.

The obvious problems

Per-packet load balancing on stateless packet-by-packet devices (routers or switches) is inherently a bad idea, as it inevitably results in packet reordering and reduced TCP throughput (I won’t even try to figure out what it could do to some UDP traffic). The only corner case where you might think you need it is when you’re trying to send traffic from a single (or a few) TCP sessions across multiple WAN uplinks, but even then packet reordering might give you worse link utilization than if you’d used a single uplink for the elephant TCP session.

Doing stateless per-packet load balancing across unequal-bandwidth links is usually a Really Bad Idea. Even ignoring the effects of packet reordering on TCP throughput, across N links you’ll never get more than N times the bandwidth of the slowest link, unless you’re using tricks that result in unequal-cost load balancing (DMZ Bandwidth with BGP, parallel MPLS-TE tunnels, or EIGRP). The proof is left as an exercise for the reader.
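To see the arithmetic, here’s a minimal sketch assuming a naive equal-split round-robin scheduler (which is what stateless per-packet load balancing effectively is). With the reader’s 30 Mbps and 50 Mbps links, the aggregate tops out at 60 Mbps, not the 80 Mbps sum:

```python
# Sketch of the unequal-link arithmetic: with equal per-packet round-robin,
# every link carries the same share of packets, so the slowest link saturates
# first and caps the aggregate at N * min(bandwidth).

def round_robin_capacity(links_mbps):
    """Aggregate throughput (Mbps) of equal-split per-packet load balancing."""
    n = len(links_mbps)
    return n * min(links_mbps)

links = [30, 50]                      # the reader's WAN uplinks
print(round_robin_capacity(links))    # 60 -- twice the slowest link
print(sum(links))                     # 80 -- the sum you were hoping for
```

Once the 30 Mbps link is full, pushing more traffic just builds a queue (and eventually drops packets) on that link while the 50 Mbps link sits at 60% utilization.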

Finally, the bandwidth-delay product might further limit the throughput of a single TCP session. See also Mathis formula and TCP throughput calculator.
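For a back-of-the-envelope feel for those limits, here’s a quick sketch of the Mathis formula and the bandwidth-delay product (the 1460-byte MSS, 50 ms RTT and 0.01% loss figures are illustrative assumptions, not numbers from the reader’s scenario):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. upper bound on a single TCP session:
    throughput <= MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: the send window needed to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

# Illustrative numbers: 1460-byte MSS, 50 ms RTT, 0.01% packet loss
print(mathis_throughput_bps(1460, 0.05, 1e-4) / 1e6)  # ~23.4 Mbps ceiling
# Window needed to fill a 50 Mbps link at that RTT:
print(bdp_bytes(50e6, 0.05))                          # 312500 bytes
```

In other words, with those assumed numbers a single session caps out around 23 Mbps regardless of link speed, and without TCP window scaling a 64 KB window at 50 ms RTT limits you to roughly 10 Mbps.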

WAN optimization products (recently relabeled as Software Defined WAN) like VeloCloud (or some others) solve the problem by reassembling and reordering the packets before delivering them to the end-host, resulting in pretty decent aggregate bandwidth… and you can always use MP-TCP.

Disclosure: I totally enjoyed VeloCloud presentation @ NFD9 (this video prompted the previous paragraph). You probably know that presenting companies indirectly cover the travel expenses of NFD delegates, but that never stopped me from having my own opinions ;) More…

4 comments:

  1. My previous employer used to do this with 'bonded' ADSL lines - we used to ensure that each ADSL was fixed rate (even then you could never guarantee an actual fixed rate over old copper) but would always tell sales/customers that it would only ever perform as well as the slowest link - they didn't care - at least until it wasn't performing well enough.

    IIRC we sometimes got better results with per-dest but it was never perfect. Damn, I hated that product.

  2. Excellent example of someone trying to solve a transport-layer problem at the network layer.

    Multipath TCP was designed for cases exactly like this one. If the network just does normal flow-based load-balancing, the routers on the asymmetrical link will simply load-balance the subflows as they normally would, and each subflow will increase its window size until it fills its pipe.

  3. Exactly right, per-packet load balancing over stateless networks does not work. Being "on both sides" takes care of the reassembly and reordering. In addition, packets are injected on the sending side based on an intelligent algorithm that takes into account the state {bandwidth, packet loss, jitter, latency} of each connected link and the application being sent. This greatly reduces the reordering problem.

  4. Even knowing the issues of per-packet load balancing, customers in India are still demanding it...


