If a data cap doesn't affect 97% of users, why bother implementing it at all? Surely the 3% can't be that significant?
A few of us immediately responded that the 3% could represent 80% (my guess) to 97% (@icemarkom) of the traffic. As I've been tracking my home Internet connection with MRTG for over a year, I was also able to get some hard facts (although the sample size is admittedly very small). We're pretty heavy Internet users (no limits on what my teenage kids are doing, and I'm mostly working from home), but the average yearly utilization of my 20 Mbps pipe is only 180 Kbps, or less than 1% of its capacity (still, over a year, that's almost 700 GB of data, or 350 months of AT&T's DataPro plan).
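The back-of-the-envelope arithmetic behind those numbers is easy to check; this sketch assumes decimal units (1 Kbps = 1,000 bit/s, 1 GB = 10^9 bytes) and the 2 GB/month DataPro quota:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

avg_bps = 180 * 1000          # 180 Kbps measured average utilization
link_bps = 20 * 1000 * 1000   # 20 Mbps access line

utilization = avg_bps / link_bps                  # fraction of capacity
yearly_gb = avg_bps * SECONDS_PER_YEAR / 8 / 1e9  # bits -> gigabytes per year
datapro_months = yearly_gb / 2                    # 2 GB per DataPro month

print(f"{utilization:.1%}")                      # 0.9% -- less than 1% of the pipe
print(f"{yearly_gb:.0f} GB per year")            # ~710 GB, "almost 700 GB"
print(f"{datapro_months:.0f} DataPro months")    # ~355 months
```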
The ever-widening gap between the expected average and peak Internet utilization of an average user and the access line speed is one of the topics of my Market Trends in Service Provider Networks webinar.
Now, imagine that someone deploys a “popular” FTP or BitTorrent server on his home PC. With hundreds of concurrent TCP sessions, it's very easy to keep the 20 Mbps (or 100 Mbps) link constantly saturated. The ratio between my average Internet utilization and his is roughly 1:100 (and, as noted above, I'm probably already in the top 5%).
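That 1:100 figure follows directly from the numbers quoted earlier (180 Kbps average versus a saturated 20 Mbps line):

```python
avg_bps = 180 * 1000            # my measured yearly average
saturated_bps = 20 * 1000 * 1000  # the hog keeps the 20 Mbps link full

ratio = saturated_bps / avg_bps
print(f"1:{ratio:.0f}")  # ~1:111, i.e. on the order of 1:100
```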
This time I don’t have to repeat my “All-you-can-eat Internet mentality” rant; Steve Foskett wrote two blog posts along a very similar line of thinking: The End of Unlimited Data: The Buffet and The End of Unlimited Data: Who’s Being Subsidized?
Obviously, the service providers have two ways to handle the resource hogs: charge them or police them (for example, by temporarily lowering their access speed). Most of them choose the obvious solution, trying to collect as much easy money as possible. The smart ones try to match their customers' expectations by prioritizing in-contract traffic or rate-limiting specific traffic types, and even show how well they're doing in a real-time graph.
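Policing of this kind is commonly built on a token-bucket algorithm: traffic conforming to the configured rate is forwarded, excess traffic is dropped. A minimal sketch follows; the rate and burst values are made-up illustration numbers, not any provider's actual policy:

```python
import time

class TokenBucket:
    """Minimal token-bucket policer: packets within the configured rate
    are forwarded, excess packets are dropped."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8      # token refill rate in bytes/second
        self.capacity = burst_bytes   # maximum bucket depth (burst allowance)
        self.tokens = burst_bytes     # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # conforming -> forward
        return False      # exceeding -> drop

# Hypothetical policy: limit a heavy user to 1 Mbps with a 64 KB burst
policer = TokenBucket(rate_bps=1_000_000, burst_bytes=64_000)
```

A sudden burst of packets drains the bucket quickly; once it is empty, traffic gets through only at the configured long-term rate, which is exactly the "temporarily lowering their access speed" behavior described above.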