Everyone knows that Service Providers and Enterprise networks diverged decades ago. More precisely, organizations that offer network connectivity as their core business usually (but not always) behave differently from organizations that use networking to support their core business.
Obviously, there are grey areas: from people claiming to be service providers who can’t get their act together, to departments (or whole organizations) that run enterprise networks looking a lot like traditional service provider networks because they’re effectively an internal service provider.
A more nuanced perspective
When I asked Russ White for his opinion, he provided an interesting perspective; his comments are quoted throughout the rest of this post.
Back to Enterprise IT
Even outside traditional service providers we can see major differences. To paraphrase Peter Wohlers (unfortunately, his phenomenal Networking Field Day presentation wasn’t recorded), in 2010 he saw three types of data center customers:
- Cloud-scale web properties who try to keep things as simple as possible because they have plenty of other headaches to deal with. These days they’d use pure layer-3 data centers running BGP to achieve the scale they need.
- Cloud providers who also try to keep things as simple as possible but already have to deal with crazy requirements like stretched subnets and workload mobility. These days they’d typically use hypervisor-based overlay virtual networks on top of pure layer-3 data centers.
- Traditional enterprises where all bets are off. As always, the bell curve applies to this category (some environments are crazier than others)… or maybe it’s a Poisson distribution with a very long tail of people who try to cram a zillion features into every box in their network to solve yet another one-off request.
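The pure layer-3 design mentioned in the first category is typically built as a leaf-and-spine fabric running eBGP between every leaf and spine, with a unique private ASN per leaf (the approach later documented in RFC 7938). Here’s a minimal sketch of the ASN and session bookkeeping such a design implies; all names and numbers (ASN base, switch counts) are illustrative assumptions, not a description of any particular deployment:

```python
# Illustrative sketch: per-leaf private ASN assignment for an eBGP
# leaf-and-spine fabric (the RFC 7938 style of layer-3-only data center).
# ASN values and switch counts are made-up assumptions.

SPINE_ASN = 64512          # all spines share a single private ASN
LEAF_ASN_BASE = 64601      # leaves get unique private ASNs from this base

def leaf_asn(leaf_id: int) -> int:
    """Return the private ASN assigned to the given leaf switch (0-based)."""
    return LEAF_ASN_BASE + leaf_id

def fabric_sessions(num_leaves: int, num_spines: int) -> list:
    """Enumerate the eBGP sessions a full leaf-and-spine mesh requires:
    every leaf peers with every spine."""
    return [
        {"leaf": f"leaf{l}", "leaf_asn": leaf_asn(l),
         "spine": f"spine{s}", "spine_asn": SPINE_ASN}
        for l in range(num_leaves)
        for s in range(num_spines)
    ]

sessions = fabric_sessions(num_leaves=4, num_spines=2)
print(len(sessions))   # 4 leaves x 2 spines = 8 eBGP sessions
print(sessions[0])
```

Giving every leaf its own ASN keeps BGP’s built-in AS-path loop prevention working without any extra routing-policy knobs, which is a big part of why this design stays so simple at scale.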
During the Open Networking webinar Russ White made an interesting observation: enterprise IT will soon split into three categories along totally different lines:
- Organizations that won’t be able to afford internal compute/storage/fabric infrastructure anymore and will move as much as possible to the (public) cloud;
- Organizations that will want to retain some on-premises infrastructure and will use hyper-converged solutions or managed services like Azure Stack;
- Organizations that are big enough to afford to invest heavily in networking software to be able to control their own destiny (as opposed to moving from one failed vendor marketecture to another every 3-5 years).
I had discussions with engineers who were heavily involved in cloudy infrastructures, and they claim your AWS bill has to be in the million-dollars-per-year range before it makes sense to think about building internal infrastructure (due to the cost of running that infrastructure).
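That rule of thumb is easy to turn into back-of-the-envelope break-even arithmetic. The sketch below uses entirely invented cost figures (hardware amortization, staff, facilities) purely to illustrate the shape of the comparison; nothing beyond the million-per-year order of magnitude comes from the discussion above:

```python
# Back-of-the-envelope break-even sketch. Every number below is an
# assumption chosen for illustration, not data from the article.

def on_prem_annual_cost(hardware_capex: float, amortization_years: int,
                        annual_staff_cost: float, annual_opex: float) -> float:
    """Yearly cost of running your own infrastructure:
    amortized hardware, plus people, plus power/space/support."""
    return hardware_capex / amortization_years + annual_staff_cost + annual_opex

# Hypothetical environment: $1.5M of gear amortized over 5 years,
# roughly three engineers, and facility/support costs on top.
yearly = on_prem_annual_cost(
    hardware_capex=1_500_000,
    amortization_years=5,
    annual_staff_cost=450_000,   # ~3 FTEs
    annual_opex=200_000,
)
print(f"${yearly:,.0f} per year")   # $950,000 per year
```

With these (invented) inputs, on-premises infrastructure starts to compete only once the equivalent cloud bill approaches a million dollars a year, which roughly matches the rule of thumb quoted above.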
Russ put the low-end size for the third type of organization at ~10,000 server ports in the Whitebox Switching @ LinkedIn podcast, and Giacomo Bernardi also mentioned thousands of boxes in the Build Your Own Service Provider Gear podcast.
However, as Russ pointed out in our (private) conversation:
I tend to think around 10k ports is probably a break-over point for the rational person. For irrational ones in either direction, all bets are off. The size and quality of the ecosystem also matters – as the ecosystem becomes better, then the minimum size when building starts making sense probably goes down. I don't know how to measure or account for this. And maybe the ecosystem just won't get any better – I don't honestly know.
And a final comment from Russ:
I suspect cloud services are going to end up in the same place as the other vendors – tossing features in because they can, causing their customers to move from marketecture to marketecture over time. It's just all so new right now that we're not seeing this yet. Once the market starts to mature, though, there's going to need to be a scramble for customers, and that scramble is probably going to be feature driven, and it's not going to be any prettier than what we have now.
Disagree? Write a comment!