Stop reinventing the wheel and look around
Building large-scale VLANs to support IaaS services is every data center designer’s nightmare and the low number of VLANs supported by some data center gear is not helping anyone. However, as Anonymous Coward pointed out in a comment to my Building a Greenfield Data Center post, service providers have been building very large (and somewhat stable) layer-2 transport networks for years. It does seem like someone is trying to reinvent the wheel (and/or sell us more gear).
A few disclaimers and caveats first:
The service providers don’t care about the end-to-end stability of your network. They provide you with a (hopefully stable) L2 transport you’ve asked for and limit your flooding bandwidth (be it broadcasts, multicasts or unknown unicasts). If you’re smart and connect routers to the L2 transport network, you’ll have a working solution (or not – just ask Greg Ferro about VPLS services). If you bridge your sites across a L2 transport network, you’ll eventually get a total network meltdown. In the data center, we don’t have the luxury of ignoring how well the servers or applications work.
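To put the flooding-bandwidth point in concrete terms, here's a minimal sketch (plain Python; the class names, rates and burst sizes are all made up for illustration) of the kind of per-class policing a service provider edge applies before flooding broadcast, multicast or unknown-unicast traffic into the transport network:

```python
import time

# Illustrative only: one token bucket per flood class caps how much
# broadcast / multicast / unknown-unicast traffic an edge port will flood.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0              # refill rate in bytes per second
        self.capacity = burst_bytes             # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, frame_len):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return True                         # frame may be flooded
        return False                            # frame is dropped, not flooded

# Example (arbitrary) per-class limits on a customer-facing port
flood_limiters = {
    "broadcast":       TokenBucket(rate_bps=1_000_000, burst_bytes=64_000),
    "multicast":       TokenBucket(rate_bps=5_000_000, burst_bytes=256_000),
    "unknown-unicast": TokenBucket(rate_bps=1_000_000, burst_bytes=64_000),
}

def flood(frame_class, frame_len):
    """Flood the frame only if its class is still within its configured rate."""
    return flood_limiters[frame_class].allow(frame_len)
```

Note what this does and doesn't do: it protects the transport network from your flooding; it does nothing to keep your bridged application traffic alive.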
Stable large L2 networks are hard to engineer. I’ve been talking with a great engineer who actually designed and built a large L2 Service Provider network. It took them quite a while to get all the details right (remember: STP was the only game in town) and make the network rock solid.
Connectivity is Service Providers’ core business and gets the appropriate focus and design/implementation budget. Networking in a data center is usually considered to be a minor (and always way too expensive) component.
However, regardless of the differences between service provider transport networks and data center networks, what we’re trying to do in every data center that offers IaaS services relying on dumb layer-2 hypervisor switches has been done numerous times in another industry. I know that learning from others never equals the thrills of figuring it all out on your own and that networking vendors need a continuous stream of reasons to sell you more (and different) boxes ... but maybe we should stop every now and then, look around, figure out whether someone else has already solved the problem that’s bothering us, and benefit from their experience.
Kurt Bales did exactly that a few days ago – trying to solve multi-tenancy issues that exceeded VLAN limitations of Nexus 5000, he decided to use service provider gear in his data center network. I know he was cheating – he has Service Provider background – but you should read his excellent post (several times) and if you agree with his approach, start looking around – explore what the service providers are doing, what the SP networking gear is capable of doing, and start talking to the vendors that were traditionally strong in L2 service provider market ... or you might decide to wait a few months for L3-encapsulating hypervisor switches (and as soon as Martin Casado is willing to admit what they’re doing I’ll be more than happy to blog about it).
More information
You’ll find in-depth discussions of data center architectures and virtual networking in my Data Center 3.0 for Networking Engineers and VMware Networking Deep Dive.
And to Ivan's point: if you have to use a link-state protocol at L2 to scale broadcast domains across lots of switches and pack a lot more systems into each domain, it does look like reinventing the wheel, though of course, as AC pointed out, developers are not going to change their habits, so we might as well reinvent it.
* Don't mesh the network too much (dual trees work best)
* Use 802.1ah (MAC-in-MAC), not 802.1ad (Q-in-Q). With MAC-in-MAC the core switches don't need to know the customer's MAC addresses (and you can fine-tune the broadcast domains)
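To illustrate the difference the commenter is pointing at, here's a rough sketch (plain Python, header fields simplified and all class names invented for illustration) of the two encapsulations: 802.1ad (Q-in-Q) only pushes an extra VLAN tag, so the core still forwards on (and learns) customer MAC addresses, while 802.1ah (MAC-in-MAC / PBB) wraps the whole customer frame in a new backbone MAC header plus a 24-bit service instance ID, so core switches only ever see backbone MAC addresses.

```python
from dataclasses import dataclass

@dataclass
class EthernetHeader:
    dst_mac: str
    src_mac: str

@dataclass
class CustomerFrame:                  # frame handed over by the customer/tenant
    header: EthernetHeader
    c_vlan: int                       # customer 802.1Q tag
    payload: bytes

# 802.1ad (Q-in-Q): push an outer S-VLAN tag, keep the customer MAC header
@dataclass
class QinQFrame:
    header: EthernetHeader            # still the customer's MAC addresses
    s_vlan: int                       # service provider (outer) tag
    c_vlan: int
    payload: bytes

# 802.1ah (MAC-in-MAC / PBB): wrap the whole customer frame in a backbone header
@dataclass
class PBBFrame:
    b_header: EthernetHeader          # backbone MACs of the ingress/egress edge bridges
    b_vlan: int                       # backbone VLAN (scopes flooding in the core)
    i_sid: int                        # 24-bit service instance ID
    inner: CustomerFrame              # opaque to the core; customer MACs stay hidden

def core_learns(frame):
    """What a core switch sees (and puts into its MAC table): the outermost header."""
    return frame.b_header if isinstance(frame, PBBFrame) else frame.header
```

With Q-in-Q, core_learns() returns customer MAC addresses, so core MAC tables grow with every customer host; with PBB it returns only the backbone MACs of the edge bridges, which is why the core MAC tables stay small and why flooding can be scoped per backbone VLAN.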
Not sure what L2 link-state protocols you have in mind. The first SPB (802.1aq) products have just started to appear.
I had the privilege (luck?) of participating in building a fairly large L2 network (100+ nodes across a fairly large geography - probably 150 km+ between the furthest nodes) in the early 2000s. It was built using traditional enterprise switches and provided connectivity between two main hubs and the rest of the nodes, with bandwidths of around 10-100 Mbit/s per minor node. There was no communication between minor nodes at L2. Each minor node sat on one or more VLANs which terminated at both main nodes. At all sites the hand-off to the customer was L2, and it was up to them how to connect it (router or switch). My memory is starting to fail me, as I wasn't involved much in operating that network, but from what I remember there were definitely more than a couple of MACs visible per node in the CAM tables, but not hundreds.
So to answer your question: in my case it was a large number of switches, a decent number of L2 domains, with not too many MACs in each domain.
The network was controlled by MSTP.