Scalability Enhancements in Cisco Nexus 1000V

The latest release of Cisco Nexus 1000V for vSphere can handle roughly twice as many vSphere hosts as the previous one (250 instead of 128). Cisco probably did a lot of code polishing to improve Nexus 1000V scalability, but I’m positive most of the improvement comes from interesting architectural changes.

Distributed NetFlow:

With Distributed NetFlow, the switch sends NetFlow export packets directly from the VEMs to the collectors. It no longer sends export packets through the VSM mgmt0 interface, significantly improving scalability (source).
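A back-of-envelope model (my sketch, not Cisco's code, with made-up numbers) shows why removing the VSM from the export path matters: the per-interval record load on the VSM goes from "every record from every host" to zero.

```python
# Toy model of NetFlow export load on the central VSM.
# Centralized export: every VEM's flow records transit the VSM mgmt0
# interface. Distributed export: each VEM talks to the collector directly.

def vsm_export_load(hosts: int, records_per_host: int, distributed: bool) -> int:
    """Flow records per export interval the VSM itself must forward."""
    if distributed:
        return 0  # every VEM exports straight to the collector
    return hosts * records_per_host  # everything funnels through the VSM

# Hypothetical figures: 250 hosts, 1000 records per export interval each.
centralized = vsm_export_load(250, 1000, distributed=False)
offloaded = vsm_export_load(250, 1000, distributed=True)
print(centralized, offloaded)  # 250000 0
```

The collectors see the same total record volume either way; the point is that no single control-plane node has to relay it.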

IGMP multicast offload:

Prior to 5.2(1)SV3(1.1), VEM modules depended on VSM to support IGMP multicast. From 5.2(1)SV3(1.1), VEMs can perform IGMP mrouter detection, IGMP member addition, and deletion without VSM support (source).
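Here's a minimal sketch of what local IGMP handling buys (my illustration of generic IGMP snooping, not the VEM's actual code): the edge switch maintains its own group-membership and mrouter-port state from the IGMP messages it sees, so it never has to round-trip to the VSM per join or leave.

```python
# Generic IGMP-snooping state kept locally on the edge switch (VEM).
class VemIgmpSnooper:
    def __init__(self):
        self.groups = {}           # multicast group -> set of member ports
        self.mrouter_ports = set() # ports leading toward a multicast router

    def on_membership_report(self, group, port):
        # Host joined: remember which local port wants this group.
        self.groups.setdefault(group, set()).add(port)

    def on_leave(self, group, port):
        ports = self.groups.get(group)
        if ports:
            ports.discard(port)
            if not ports:
                del self.groups[group]  # last member left, prune the group

    def on_query_seen(self, port):
        # IGMP queries arrive from the multicast router's direction.
        self.mrouter_ports.add(port)

    def egress_ports(self, group):
        # Forward to local members plus mrouter ports -- no VSM involved.
        return self.groups.get(group, set()) | self.mrouter_ports

snooper = VemIgmpSnooper()
snooper.on_query_seen("uplink1")
snooper.on_membership_report("239.1.1.1", "veth3")
print(sorted(snooper.egress_ports("239.1.1.1")))  # ['uplink1', 'veth3']
```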

Then there’s VXLAN MAC distribution:

MAC distribution uses unicast to distribute MAC addresses, thereby reducing MAC update messages and improving scale (source).
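The mechanism is easy to picture (a sketch of the general idea, with invented names; the shipping implementation will differ): the control plane unicasts each MAC-to-VTEP binding to every VEM once, so the VEMs can forward VXLAN frames from their local tables instead of flooding unknown-unicast traffic to learn the mapping in the data plane.

```python
# Sketch of unicast MAC distribution into per-VEM VXLAN forwarding tables.
class VemVxlanTable:
    def __init__(self, name):
        self.name = name
        self.mac_to_vtep = {}  # (vni, mac) -> remote VTEP IP

    def install(self, vni, mac, vtep_ip):
        self.mac_to_vtep[(vni, mac)] = vtep_ip

    def lookup(self, vni, mac):
        # A hit means the frame can be sent straight to the remote VTEP;
        # no flood-and-learn needed.
        return self.mac_to_vtep.get((vni, mac))

def distribute(vems, vni, mac, vtep_ip):
    # One unicast update per VEM when a MAC is learned somewhere.
    for vem in vems:
        vem.install(vni, mac, vtep_ip)

vems = [VemVxlanTable(f"vem{i}") for i in range(3)]
distribute(vems, 5000, "00:50:56:aa:bb:cc", "10.1.1.10")
print(vems[2].lookup(5000, "00:50:56:aa:bb:cc"))  # 10.1.1.10
```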

Finally, you cannot run centralized (VSM-based) LACP anymore:

In Release 5.2(1)SV3(1.1), only LACP offload to VEM is supported (source).

It’s pretty easy to spot the pattern. Every single scalability improvement pushed some aspect of the centralized control plane into the distributed switching elements, nicely proving the point I made just a few days prior to the Nexus 1000V 3.1 launch: centralized control planes limit scalability (note: I had no idea about the upcoming product launch – Cisco stopped briefing me a while ago, probably around the time I found NSX interesting).

I am positive someone will manage to prove that this reasoning doesn’t apply to his favorite OpenFlow controller. People also managed to prove that the Earth must be flat.

More information

Check out SDN resources on ipSpace.net, the SDN webinars and workshop, and listen to the Software Gone Wild podcast.
