P2P Traffic Is Bad for the Network
I’m positive you all know that. I also hope that you’re making sure it’s not hogging your enterprise network. Service Providers are not so fortunate: some Internet users claim that transferring unlimited amounts of P2P traffic is their birthright. I don’t really care what kind of content these users transfer; they are consuming enormous amounts of network resources due to a combination of P2P client behavior (which is clearly “optimized” to grab as much bandwidth as possible) and the default TCP/QoS interaction.
Let’s start with the P2P behavior:
- P2P implementations (for example, BitTorrent clients) open a large number of simultaneous TCP sessions. 200 concurrent sessions is a very “reasonable” number, and some people claim that the 10 new sessions per second limit imposed by Windows XP SP2 is reducing their speed… now you know how many sessions they tend to open.
- A P2P client can saturate any link for a very long time. I’m a heavy Internet user, but I still use only around 1% of my access speed (long-term average). A P2P client can bring the long-term average close to 100%.
I wouldn’t mind the reckless implementations of P2P clients if the Internet were an infrastructure where every user got a fair share of bandwidth. Unfortunately, the idealistic design of the early Internet ensures that (with default router settings) every TCP session gets the same amount of bandwidth. A P2P user with 200 concurrent sessions thus gets 200 times the bandwidth of another user downloading her e-mail over a single POP3 session. Clearly not a win-win situation (for anyone but the P2P guy), and one that could easily result in “a few” annoyed calls to the SP helpdesk.
What we would need to cope with the P2P users is per-user (per-IP-address) queuing, which is not available on any router that I’m familiar with (let alone on any high-speed platform). If you have other information, please share it in the comments.
The best solution I’m aware of is a dedicated box that can perform per-user measurement with real-time actions. Unfortunately, such boxes are usually too expensive to deploy at every bottleneck in the network; they typically sit somewhere between the access and core network with plenty of bandwidth around them.
To address the P2P challenge with bandwidth control devices, you could define very smart bandwidth groups, or track per-user quotas (actually per-IP-address quotas) and act once a user exceeds her quota, for example by hard-limiting the user’s bandwidth or by remarking her packets with a low-priority DSCP value, which can then be used to select a different (low-priority) queue on congested links. That’s exactly the approach Comcast took in the end and documented in RFC 6057.
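To make the quota-and-remark idea a bit more concrete, here is a minimal sketch of per-IP-address accounting that remarks traffic to a low-priority DSCP once a user exceeds her quota. It is purely illustrative; the quota value, DSCP codepoint and class names are my assumptions, not the behavior of any particular bandwidth-control box.

```python
# Toy per-IP-address quota accounting with DSCP remarking.
# All numbers (quota, DSCP value) are illustrative assumptions.
CS1 = 8                     # "scavenger"-style low-priority DSCP codepoint
QUOTA_BYTES = 5 * 10**9     # assumed per-user quota per accounting period

class QuotaTracker:
    def __init__(self, quota=QUOTA_BYTES):
        self.quota = quota
        self.usage = {}     # source IP address -> bytes seen this period

    def process(self, src_ip, length, dscp):
        """Account for one packet; return the DSCP it should leave with."""
        self.usage[src_ip] = self.usage.get(src_ip, 0) + length
        if self.usage[src_ip] > self.quota:
            return CS1      # over quota: remark to low priority
        return dscp         # under quota: leave the marking untouched

    def reset(self):
        """Called at the end of every accounting period."""
        self.usage.clear()

tracker = QuotaTracker()
print(tracker.process("192.0.2.1", 1500, dscp=0))   # still 0 while under quota
```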
Have a look at the following (apologies if you already knew all that, but it didn't appear to have come through in your post):
1) http://www.zdnet.com/blog/storage/p4p-faster-smarter-p2p/303
2) http://en.wikipedia.org/wiki/P2P_caching
3) http://blog.bittorrent.com/2009/10/05/changing-the-game-with-%CE%BCtp/
As for the telecom industry: they are in a vise into which they put themselves.
P4P solves the long-haul congestion and might improve the situation for hub-and-spoke access networks. Supposedly it makes the situation worse on cable (shared media) networks, because it's easier to saturate the cable with locally-sourced traffic.
P2P caching is an interesting idea. It would be interesting to see how big a cache you need to have reasonable improvements.
uTP - need to study the details (assuming they are published anywhere). As long as it runs on TCP, it's only a marginal improvement. TCP should back off automatically when experiencing delays; the problem is the large discrepancy between the number of TCP sessions created by a P2P user versus the number of sessions a regular web/e-mail user has opened (and their duration).
http://torrentfreak.com/facebook-uses-bittorrent-and-they-love-it-100625/
Next, the whole reason the Internet works as well as it does is the presumed cooperative behavior of its users, which allowed designers to bypass numerous checks and balances that burdened traditional telco technologies. If we can no longer rely on cooperative behavior, the costs of the Internet infrastructure will go up, as those checks will have to be implemented.
As I wrote (I hope you did consider my technical arguments), the solution is per-IP-address queuing, which is not commonly implemented.
Last but not least, the root cause of all the problems we're discussing is that THERE IS NO CONTRACT. The only parameter promised by an ISP is the access speed to the first router (and in a cable network, not even that). Everything else is implicitly assumed by one or the other party.
As soon as we have per-user or per-IP-address queuing or bandwidth allocation, P2P will stop being a problem and IPsec will not help you a bit.
I'm also positive (and I am sure you'd agree with me) that Facebook or Twitter would not allow their BitTorrent traffic to run rampant, squeezing out all other traffic from their network.
They have at least two options to control BT bandwidth utilization: first, it's quite easy to tightly control BT clients, which let you specify the number of TCP sessions (per-torrent or globally), per-torrent bandwidth, and total bandwidth; second, you can mark the desired BT traffic in a private network and queue it accordingly.
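To illustrate the client-side knobs (per-torrent versus total bandwidth), here is a rough sketch of hierarchical rate limiting: a global token bucket shared by all torrents plus a tighter per-torrent bucket. The class names, rates and burst sizes are mine, not any particular BT client's API.

```python
import time

class TokenBucket:
    """Simple token bucket: rate in bytes per second, burst in bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Global limit shared by every torrent, plus a per-torrent limit
# (all numbers are illustrative).
global_bucket = TokenBucket(rate=2_000_000, burst=256_000)    # ~2 MB/s total
per_torrent = {t: TokenBucket(rate=500_000, burst=64_000)     # ~0.5 MB/s each
               for t in ("torrent-a", "torrent-b")}

def may_send(torrent, nbytes):
    # Simplification: a refusal by one bucket may still consume tokens in the
    # other; a real shaper would check both buckets before consuming tokens.
    return per_torrent[torrent].allow(nbytes) and global_bucket.allow(nbytes)

print(may_send("torrent-a", 16_384))
```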
However, while I absolutely enjoy the information you've shared, I fail to see how it's relevant to our discussion. It's like saying viruses are good because some retroviruses are used as DNA carriers in genetic engineering.
I was not speaking about the protocols, but about the uncontrolled use of these protocols in an environment that was designed for cooperative use.
http://www.net.t-labs.tu-berlin.de/research/isp-p2p/
It was killed instantly by the audience, needless to say.
http://www.faqs.org/rfcs/rfc3514.html
http://blog.ioshints.info/2008/04/rfc-3514-implemented-by-asr-series-of.html
Now, here's a good question: does SCE support IPv6 ... not sure :-P, need to ask our gurus.
Salute from IETF78 @Maastricht.
How would you define fairness (e.g. egalitarian/utilitarian approaches) among ISP customers? What scale do you define fairness at (link, aggregation point, ISP, multiple ISPs, Internet)? Is it possible to implement fairness at all, when ISPs oversubscribe their networks and offer "unlimited" contracts to customers based on statistical assumptions which are never satisfied asymptotically?
The above questions are fundamental to any statistical-multiplexing network that uses oversubscription to realize better value. The intuitive notion of fairness is broken at the very beginning! :) All proposed solutions (e.g. changing the flow mask) may help alleviate the problem, but not solve it, as no one knows how fairness should truly be implemented in the deceptive world of ISP services :).
Here is an example: let's say you aggregate flows based on customer IP address to defeat single hosts sourcing a barrage of flows. With this model, an enterprise customer with many hosts still gets an advantage over a customer with fewer hosts in their network. It's the same problem, just at a different scale. Flow fairness simply does not work the way it should, and any solution within the flow-fairness model that works at a small scale may not work at a larger scale.
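Here is a quick back-of-the-envelope illustration of that scale problem; the link capacity, flow counts and host counts are made up:

```python
# Made-up numbers showing how the unfairness reappears at a larger scale.
LINK = 100.0   # Mbps of bottleneck capacity

# Per-flow fairness: every TCP session gets an equal share.
flows = {"p2p-host": 200, "mail-host": 1}
total_flows = sum(flows.values())
per_flow = {who: LINK * n / total_flows for who, n in flows.items()}
print(per_flow)        # {'p2p-host': ~99.5, 'mail-host': ~0.5}

# Aggregate per IP address instead: every host gets an equal share...
hosts = {"enterprise-customer": 50, "home-customer": 1}
total_hosts = sum(hosts.values())
per_customer = {who: LINK * n / total_hosts for who, n in hosts.items()}
print(per_customer)    # {'enterprise-customer': ~98.0, 'home-customer': ~2.0}
# ...and the same imbalance shows up one level higher, between customers.
```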
Finally, I would recommend reading the following draft document:
http://bobbriscoe.net/projects/refb/draft-briscoe-tsvarea-fair-02.html
It offers interesting discussion and some rather novel ideas for fairness management.
Here's the uTP protocol description for your enjoyment: http://bittorrent.org/beps/bep_0029.html . No, it does not use TCP. :)
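For anyone wondering what "not TCP" means in practice: uTP runs over UDP and adjusts its congestion window based on the measured one-way delay rather than on packet loss (the BEP 29 / LEDBAT idea). Here is a very rough sketch of that control loop; the target delay, gain constant and packet size are illustrative, not the exact values from the spec.

```python
# Delay-based congestion control in the spirit of uTP/LEDBAT (BEP 29).
# Constants are illustrative, not the values mandated by the spec.
TARGET_DELAY_MS = 100     # queueing delay the sender aims for
MAX_GAIN_BYTES = 3000     # maximum window change per round-trip
PACKET_SIZE = 1400

def adjust_window(cur_window, outstanding_bytes, measured_delay_ms):
    """Shrink the window when measured delay exceeds the target and grow it
    when the queue is below target; loss-based TCP reacts only to drops."""
    off_target = (TARGET_DELAY_MS - measured_delay_ms) / TARGET_DELAY_MS
    window_factor = outstanding_bytes / max(cur_window, 1)
    gain = MAX_GAIN_BYTES * off_target * window_factor
    return max(PACKET_SIZE, cur_window + gain)

window = 30_000
for delay in (20, 60, 120, 180):      # rising one-way delay in milliseconds
    window = adjust_window(window, outstanding_bytes=window, measured_delay_ms=delay)
    print(f"delay={delay:3d} ms -> window={window:8.0f} bytes")
```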
Regarding the cache size for P2P - I think you might be surprised how little storage you need to get high hit ratios. There is a tendency to download stuff that is "popular" at a given time, and a few terabytes of storage (well below 10) will easily take care of that. The reason is *why* people download certain content: most often it's because their friends are talking about it, a particular movie, show or music album. The rest of the available content lives in relative obscurity.
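As a rough illustration of why a modest cache can work (the catalog size and the Zipf-like popularity curve below are assumptions, not measured data), caching only the most popular items already captures the bulk of the requests:

```python
# Rough illustration: Zipf-like popularity means a small cache catches
# most downloads. Catalog size and exponent are made-up assumptions.
CATALOG = 1_000_000    # distinct items available
EXPONENT = 1.0         # assumed Zipf exponent

weights = [1 / rank ** EXPONENT for rank in range(1, CATALOG + 1)]
total = sum(weights)

def hit_ratio(cached_items):
    """Fraction of requests served from cache when caching the top-N items."""
    return sum(weights[:cached_items]) / total

for top_n in (1_000, 10_000, 100_000):
    print(f"cache the top {top_n:>7} items -> ~{hit_ratio(top_n):.0%} hit ratio")
```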
Also, most P2P caching solutions I've heard about provide mechanisms for controlling the download rate without the need to use DPI solutions.
And, of course, there are signature-based DPI solutions, which can enforce QoS policies directly, without the need to cache anything.
As Ivan has pointed out in a previous comment, there is no contract other than an implicit access rate to the first hop router.
Maybe there should be one? Blaming the users for their interpretation of the marketing materials is crazy. Users have no idea what the network is actually capable of, or what's expected of them. All they know is what's printed in 10m tall print on a billboard.
Blame your marketing department for this debacle, not your users. :-P
It's an attitude that reeks of large, top-down enterprise, where IT is in charge and fucking up the network will get you fired...as opposed to a competitive market where you have to innovate and serve your customers, or you will go out of business.
I'm saying the network sucks because it can't let people do what they want to do. Users *should* be able to do whatever they damned well please, and if the technology doesn't exist to let them, then that's a tremendous opportunity for engineers to tell the "hey, I helped enable [this torrenty-like-thing we all use and love now]" story 20 years from now, and for businesses to make a shitload of cash in the meantime.
Perhaps also a way for the IETF to atone for PIM ;-)
* fine-tune your BitTorrent settings to consume 1/10 to 1/4 of your access speed and limit the number of TCP connections to 50-100.
* implement the same fine-tuning mechanism automatically in the BitTorrent client;
* use a BitTorrent client with uTP (supposedly it works better than BT over TCP; I need to study it a bit);
* accept that BT traffic is classified as lower-priority traffic. Not rate-limited, not throttled, not dropped, just easily recognizable and lower priority. As BT represents 50-60% of Internet traffic, giving the other traffic priority will not hurt it badly (see the toy scheduler sketch below).
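To show why "just lower priority" is not the same thing as throttling, here is a toy two-class allocation (link capacity and demands are made up): other traffic is served first and BT gets everything that is left, which on an uncongested link is still almost the whole link.

```python
# Toy strict-priority split between "other" traffic and BitTorrent.
# BT is never rate-limited; it only yields to other traffic under congestion.
LINK = 100.0   # Mbps

def allocate(other_demand, bt_demand, link=LINK):
    other = min(other_demand, link)      # other traffic is served first
    bt = min(bt_demand, link - other)    # BT gets whatever capacity is left
    return other, bt

print(allocate(other_demand=10.0, bt_demand=500.0))   # (10.0, 90.0): BT still gets most of the link
print(allocate(other_demand=80.0, bt_demand=500.0))   # (80.0, 20.0): BT backs off only under congestion
```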
These modifications would make [this torrenty-thing we all use] more lovable without a major sacrifice. Unfortunately, the argument left the technical waters years ago and became political (at best) or (usually) religious.
Of course some SPs (particularly those that love ridiculously low usage caps) would still try all sorts of tricks to raise their ARPU, but the market will eventually deal with them.
I am one of them. I work as a network security admin - but when I am at home I am a paying customer.