P2P Traffic and the Internet, Part 2
As expected, my "P2P traffic is bad for the network" post generated lots of comments: from earning me another wonderful title (shill for Internet monopolies) that I'll proudly add to my previous awards, to numerous technical comments, and even a link to a very creative use of BitTorrent to solve software distribution problems (thanks again, @packetlife).
Most of the commenters missed the main point of my post and somehow assumed that since I don't wholeheartedly embrace P2P traffic, I want to ban it from the Internet. Far from it; what I was trying to get across was a very simple message:
- current QoS mechanisms allow P2P clients to grab a disproportionate amount of bandwidth;
- per-session queuing needs to be replaced with per-user queuing (see the sketch after this list);
- few devices (usually dedicated boxes) can do per-user bandwidth management.
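To see why the first two points matter, here's a back-of-the-envelope sketch in Python (the link speed and session counts are made up for illustration; this is not any vendor's queuing implementation) comparing how per-flow and per-user fairness split a bottleneck between a casual web user and a P2P user running dozens of parallel sessions:

```python
# Back-of-the-envelope illustration with made-up numbers: how a 10 Mbps
# bottleneck is split between a web user with 2 TCP sessions and a P2P
# user with 40 sessions, under per-flow versus per-user fairness.

LINK_BPS = 10_000_000                      # shared bottleneck capacity
users = {"web-user": 2, "p2p-user": 40}    # user -> number of parallel TCP sessions

# Per-session (per-flow) fairness: every TCP session gets an equal share,
# so a user's aggregate share grows with the number of sessions it opens.
total_flows = sum(users.values())
per_flow = {u: LINK_BPS * n / total_flows for u, n in users.items()}

# Per-user fairness: the link is split equally between users first;
# the number of sessions a user runs no longer matters.
per_user = {u: LINK_BPS / len(users) for u in users}

for user in users:
    print(f"{user:9s}  per-flow: {per_flow[user] / 1e6:4.1f} Mbps"
          f"   per-user: {per_user[user] / 1e6:4.1f} Mbps")

# Per-flow fairness gives the P2P user 40/42 of the link (~9.5 Mbps);
# per-user fairness caps it at 5 Mbps regardless of the session count.
```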
Not surprisingly, Petr Lapukhov was even more succinct: "The root cause of the P2P QoS problem is the flow-fairness and congestion-adaptation model that has been in the Internet since the very first days" and thus provided a great introduction to a more fundamental problem: while we're rambling about the "popular" P2P topic, we're forgetting that the Internet was never designed to cope with what we're throwing at it.
The basic premise commonly used in the design of Internet protocols was cooperative user behavior. This attitude allowed the Internet to become fast and cheap and finally triumph in the public data arena. While the ITU was struggling to design foolproof protocols that the users could never abuse, the IETF was happily creating just-good-enough protocols that worked well between a few friends. The ultimate example: SMTP versus X.400. Unfortunately, in this case the ITU had the last laugh ... with sender authentication and nonrepudiation embedded in X.400, and per-message charges billed to the sender, our inboxes would be spam-free. Obviously the victory (if there was one) was pyrrhic, as (public) X.400 shared the fate of T. rex years ago.
Likewise, TCP was designed for an environment in which every session should get the same share of bandwidth (in the early days, one user would have one or at most a few sessions). It works astonishingly well: if you run X parallel TCP sessions across a link (even without using any decent QoS mechanism), each one will (on average) get its fair share. Low-speed queuing mechanisms like WFQ enhanced the concept to ensure that:
- TCP sessions get their fair share even when non-TCP traffic tries to grab more than expected;
- Interactive sessions are not starved by batch sessions with large packet bursts (see the sketch below).
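Here's a toy deficit-round-robin scheduler in Python, my own rough approximation of the flow-fair, byte-based behavior described above (not Cisco's WFQ code), showing that a batch flow with full-size packets and an interactive flow with small packets end up with roughly the same number of bytes on the wire:

```python
from collections import deque

# Toy deficit-round-robin scheduler: a rough approximation of flow-fair,
# byte-based queuing (not any router's actual WFQ code). Each flow earns
# the same byte quantum per round, so a batch flow with full-size packets
# cannot monopolize the link at the expense of an interactive flow.

QUANTUM = 1500   # byte credit added to every flow per scheduling round

flows = {
    "interactive": deque([64] * 300),     # small, latency-sensitive packets
    "batch":       deque([1500] * 200),   # full-size bulk-transfer packets
}
deficit = {name: 0 for name in flows}
sent_bytes = {name: 0 for name in flows}

for _ in range(10):                       # run ten scheduling rounds
    for name, queue in flows.items():
        deficit[name] += QUANTUM
        # Transmit packets while the flow has both packets and byte credit.
        while queue and queue[0] <= deficit[name]:
            size = queue.popleft()
            deficit[name] -= size
            sent_bytes[name] += size

print(sent_bytes)
# -> roughly the same byte count for both flows: the interactive flow's
#    small packets are interleaved with the batch flow's bursts instead of
#    waiting behind them.
```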
The third example: years ago, when Internet QoS became a hot topic, we had two competing architectures:
- Intserv, where each application session would have to reserve the bandwidth it needs and routers could perform explicit CAC (call admission control), even tied to an authentication server (yeah, it was probably an ITU plant) and
- Diffserv, where the core network would rely on the DSCP markers in individual packets and perform only low-granularity QoS decisions (for example, per-class queuing and intra-class selective dropping).
As we all know, Diffserv won because it scales ... but it comes with an implicit risk: the core routers have to trust the edge routers (or the users).
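To make that trust problem concrete, here's a minimal Python sketch (a hypothetical illustration, not a complete Diffserv deployment) of how trivially an end host can set any DSCP value it likes on its own traffic:

```python
import socket

# Hypothetical illustration of Diffserv edge marking (works on platforms
# that expose the IP_TOS socket option): the sender sets the DSCP value in
# the IP header, and core routers are expected to trust that marking and
# map it to a per-class queue.

DSCP_EF = 46                # Expedited Forwarding (RFC 3246)
TOS_VALUE = DSCP_EF << 2    # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent on this socket now carries DSCP EF -- which is exactly
# the implicit risk: nothing stops an untrusted host from marking its bulk
# traffic as EF, so the markings have to be policed (re-marked) at the edge.
sock.sendto(b"hello", ("192.0.2.1", 5000))   # 192.0.2.1 = documentation address
sock.close()
```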
Last example (this one very close to my heart): BGP. It was designed to be used between cooperating entities and thus has very few security mechanisms (inbound and outbound filters are primarily a policy tool) and no authentication-of-origin mechanism. The simple design was an obvious success: IDRP never moved far beyond the whiteboard (PowerPoint was not so popular in those days), but we occasionally pay a high price: a router you've never heard of can cause Internet-wide routing flaps.
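A toy Python sketch (nothing resembling a real BGP implementation; the prefixes are documentation ranges used as placeholders) of the kind of per-neighbor prefix filtering that has to stand in for the missing origin authentication:

```python
import ipaddress

# Toy illustration: with no authentication-of-origin in the protocol itself,
# the main protection an operator has is an explicit per-neighbor filter
# listing the prefixes that neighbor is allowed to announce.

ALLOWED = [ipaddress.ip_network("192.0.2.0/24"),
           ipaddress.ip_network("198.51.100.0/24")]

def accept_announcement(prefix: str) -> bool:
    """Accept a prefix only if it falls within the neighbor's allow-list."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed) for allowed in ALLOWED)

print(accept_announcement("192.0.2.0/25"))    # True  -- inside the allow-list
print(accept_announcement("203.0.113.0/24"))  # False -- would be dropped
```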
To summarize: the Internet is as successful as it is because it's simple and just good enough ... and it was designed that way because the designers assumed cooperative behavior from all parties. People who game the system might in the end force the industry to design and deploy more complex (and thus more expensive) solutions.
Our fixed broadband offering will also have varying levels of per-user bandwidth controls. I won't be shocked to see PCRFs, SCEs and other fancy DPI equipment generally embraced by mobile data network operators being deployed by regular ISPs ... more fun for the geeks, I guess.
There is an explanation, though. The modern Internet ("modern" even though it technically has its roots in the 1980s) is so deeply embedded in business processes that making any drastic design change is almost impossible. This is why we get all that weird stuff like TRILL or LISP instead of more "technically advanced" academic solutions that, however, do not satisfy the short-term "ROI requirements". It's so much more fun living in the world of theoretical concepts than solving real-life "boring" engineering problems ;)
Even then, a large number of users will complain because they do not understand that their x Mbps service is not a dedicated amount of throughput beyond the last mile, or that the low per-Mbps price they pay is achieved thanks to oversubscription and statistical multiplexing, not just Moore's law or economies of scale. Not to mention that very little xDSL and FTTH gear is non-blocking in the backplane or uplink, and no cable network has a 1:1 ratio of total subscribed speeds to aggregate DOCSIS channel capacity in the access portion.
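To put rough numbers on the oversubscription point (the figures below are made up for illustration, not taken from any real access network):

```python
# Made-up numbers illustrating why an "x Mbps" service cannot be a dedicated
# end-to-end guarantee: the price point relies on statistical multiplexing
# across an oversubscribed aggregation link.

subscribers = 500
sold_speed_mbps = 100      # advertised "up to" rate per subscriber
uplink_mbps = 1_000        # shared aggregation/uplink capacity

oversubscription = subscribers * sold_speed_mbps / uplink_mbps
worst_case_share = uplink_mbps / subscribers

print(f"oversubscription ratio: {oversubscription:.0f}:1")                 # 50:1
print(f"share if everyone transmits at once: {worst_case_share:.1f} Mbps") # 2.0 Mbps
# Statistical multiplexing works because nowhere near all 500 subscribers
# transmit at full rate at the same time -- until sustained P2P transfers
# start to change that assumption.
```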
At the moment there is plenty of good DPI hardware that can effectively handle 10 Gbps+ in a few RUs. The problem is really in setting it up with policies that are as non-intrusive as possible. After all, if only the top 1-5% of users are "troublesome" because of their sustained use, the rest of your users should never notice the solution. Many vendors are now focusing on improving the control plane of their DPI ecosystem, making it more aware of current congestion by feeding it data on congestion events upstream and downstream.
If you have to solve the problem in a few years, you have to solve it within the network.
However, you have to be prepared for a very vocal minority and use transparency and fairness to deflect them.
In a wireless environment, radio spectrum is a scarce resource, and one pimple-faced kid downloading teenage fantasies via a base station kills the experience for the rest of the subscribers who pay the same fees. This is not equitable.
Most web browsing requires 128 kb/s and email 64 kb/s; P2P requires 8 to 10 Mb/s. As George Orwell put it in Animal Farm, some animals are more equal than others...
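Using the figures quoted in the previous comment, a quick (purely illustrative) calculation of how many light users a single sustained P2P transfer displaces on a shared channel:

```python
# Quick arithmetic with the figures quoted above (rough, illustrative only).
web_kbps = 128       # typical web-browsing demand
mail_kbps = 64       # typical e-mail demand
p2p_kbps = 10_000    # upper end of the quoted 8-10 Mb/s P2P figure

print(f"one P2P user ~ {p2p_kbps // web_kbps} concurrent web users")      # ~78
print(f"one P2P user ~ {p2p_kbps // mail_kbps} concurrent e-mail users")  # ~156
```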
Here are some relevant links I found in the past:
http://spectrum.ieee.org/telecom/standards/a-fairer-faster-internet-protocol/0
http://www.zdnet.com/blog/ou/fixing-the-unfairness-of-tcp-congestion-control/1078
http://www.bobbriscoe.net/projects/refb/
http://tools.ietf.org/html/draft-briscoe-tsvarea-fair-02
IMHO, the biggest issue when trying to "limit" something is that the very few users causing the actual congestion are the ones who are most active on social media and will complain at every chance they get.