Nexus 1000V and vMotion

I thought the Nexus 1000V was like Aspirin compared to VMware’s vSwitch, providing tons of additional functionality (including LACP and BPDU filter) and the familiar NX-OS CLI. It turns out I was right in more ways than I imagined; the Nexus 1000V solves a lot of headaches, but it can also cause heartburn due to a particular combination of its distributed architecture and its reliance on the vDS object model in vCenter.

Fact#1: Nexus 1000V has a single control plane (Virtual Supervisor Module – VSM) controlling a large number of distributed data planes (Virtual Ethernet Module – VEM). The maximum number of VEMs controlled by a VSM is 64, which proves yet again that centralized control planes aren’t infinitely scalable (OpenFlow aficionados will obviously disagree).

VMware’s vNetwork Distributed Switch (vDS) can span 350 hosts because it doesn’t have a control plane; vDS is just a management-plane templating tool.

[Image: spider web. VSM must be just around the corner]

Fact#2: Nexus 1000V is represented as a vDS in vCenter; that’s the only way to represent a distributed switch in vCenter.

Fact#3: You cannot move a running virtual machine between hosts that are not part of the same vDS.

You can perform a manual vMotion from any vSphere host to any other vSphere host if the VM you’re moving is connected to standard vSwitches (assuming port group names and configuration parameters match between the two hosts); if the VM is connected to a vDS, the two hosts must belong to the same vDS.

This seems to be more of a vCenter limitation than a Nexus 1000V limitation. VMware probably doesn’t care because its vDS already spans 350 hosts … and it wouldn’t be too hard to stretch it across even more hosts if someone really wanted that.
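
If you want to verify the constraint before moving a VM, here’s a minimal pyVmomi sketch (not from the original post) that checks whether two hosts share a vDS, or at least have matching standard port-group names. The vCenter name, host names, and credentials are placeholders.

```python
# Sketch only: check vDS membership and standard port-group overlap
# between two hosts before attempting a manual vMotion.
# 'vcenter.example.com', 'esx1/esx2.example.com' and the credentials
# are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_host(content, name):
    # Walk the vCenter inventory and return the HostSystem with the given name
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    return next(h for h in view.view if h.name == name)

def shared_dvs_uuids(src, dst):
    # Proxy switches on a host represent the vDS instances it belongs to
    uuids = lambda h: {ps.dvsUuid for ps in h.config.network.proxySwitch}
    return uuids(src) & uuids(dst)

def common_portgroups(src, dst):
    # Standard vSwitch port groups defined on both hosts
    names = lambda h: {pg.spec.name for pg in h.config.network.portgroup}
    return names(src) & names(dst)

si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='secret',
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
src = find_host(content, 'esx1.example.com')
dst = find_host(content, 'esx2.example.com')

if shared_dvs_uuids(src, dst):
    print('Hosts share a vDS: vMotion of vDS-attached VMs should pass the network checks')
elif common_portgroups(src, dst):
    print('Only standard port groups match: VMs attached to a vDS cannot be moved')
else:
    print('No common virtual networking: vMotion will fail its compatibility checks')

Disconnect(si)
```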

Conclusions

You can easily use Nexus 1000V in combination with DRS/HA clusters. These clusters can have at most 32 hosts (64 VEMs is thus more than enough), and automatic vMotion and load balancing will work as expected.

Combining two DRS/HA clusters in a Nexus 1000V vDS is probably more than enough for a typical enterprise data center (you can still vMotion VMs between the clusters if you have to do large-scale maintenance or a software upgrade).

IaaS cloud providers might have a different view. Limiting VM mobility to two racks (yes, you can squeeze 32 UCS blades in a single rack) just doesn’t sound right.

Webinars? Sure!

Virtual networking scalability is one of the main topics of my Cloud Computing Networking – Under the Hood webinar (register for the live session).

3 comments:

  1. Simon Hamilton-Wilkes (07 December, 2011 19:43)

    I thought you could vMotion between vDS provided you manually ensured the source and destination port group names are the same - so if you strictly follow an internal standard that limitation is less firm.

  2. Ivan,

    Can you do a 3 hour seminar dedicated to storage networking and all new trends in storage like Thin Provisioning, vFilers etc?

    thanks,

  3. So did I ... and we were both wrong :'(

    Here's one more link: https://supportforums.cisco.com/thread/2107825



Ivan Pepelnjak, CCIE#1354, is the chief technology advisor for NIL Data Communications. He has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced technologies since 1990. See his full profile, contact him or follow @ioshints on Twitter.