
What exactly is a Nexus 1000V?

Nexus 1000V thoroughly confuses a lot of people unfamiliar with the intricacies of virtualization (more so because Cisco usually discusses it together with hardware-based switches like the Nexus 5000 and Nexus 7000). These are typical questions I get from my readers:

What exactly is the Nexus 1000V? It sits in the VMware host, but how do the servers connect to it? Is it a software connection spilled out into hardware at the 1000V egress? Do you somehow track what traffic belongs to each server and apply policies to it like a normal switch would?

Nexus 1000V is a replacement for VMware’s distributed switch; it’s a software-only layer-2 switch sitting inside the VMware hypervisor kernel.

It uses the same NX-OS code base and the same familiar CLI as the hardware Nexus switches, so you can configure it the way you’d configure any other Nexus switch. Mirroring the supervisor/linecard architecture of a hardware switch, the Nexus 1000V consists of switching modules embedded in the VMware hypervisor (VEM – Virtual Ethernet Module) and control-plane software running in a VM (VSM – Virtual Supervisor Module). The VSM can run on any ESX host or on a dedicated appliance (Nexus 1010) and controls up to 64 VEMs.
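To illustrate the “familiar NX-OS CLI” point: a VM-facing interface on the Nexus 1000V is typically defined through a port profile configured on the VSM. The sketch below uses hypothetical VLAN and profile names and is written from memory, so treat it as an approximation and check the Cisco configuration guide for exact syntax:

```
! Hypothetical example: a VM-facing (vEthernet) port profile on the VSM.
! VLAN number and names are made up for illustration.
vlan 100
  name Web-Servers

port-profile type vethernet WebServers
  switchport mode access
  switchport access vlan 100
  no shutdown
  vmware port-group      ! expose the profile to vCenter as a port group
  state enabled
```

Once the profile is enabled, it appears in vCenter as a port group, and the server administrator simply attaches VM NICs to it; the networking team keeps control of the actual switch configuration.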

Like VMware’s vSwitch, the VEM resides in the hypervisor kernel: the virtual servers (VMs) running on an ESX host connect through virtual ports to that host’s VEM, and the VEM uses the existing physical Ethernet ports (pNICs) to connect to the rest of the data center network. As noted above, the Nexus 1000V is a purely software solution; there’s no switching hardware associated with it.
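The pNIC uplinks are likewise defined through a port profile, this time of type ethernet. A sketch under the same caveats as above (hypothetical names, syntax from memory; mac-pinning is the usual choice when the upstream switches don’t support a multi-chassis port channel):

```
! Hypothetical example: an uplink (ethernet) port profile bound to the
! host pNICs. VLAN range and profile name are made up for illustration.
port-profile type ethernet DC-Uplink
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  channel-group auto mode on mac-pinning   ! pin each VM's MAC to one pNIC
  no shutdown
  state enabled
```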

VEM pass-through used in VM-FEX/VN-Link is a totally different story. It uses a VEM inside the VMware kernel to modify vSwitch/vDS behavior, but it does not have the NX-OS-based control plane.

Deploying Nexus 1000V in your VMware-based data center gives you numerous advantages (read the relevant Cisco/VMware white paper for more details), but as it’s a software-only solution it cannot implement all the features you’d expect in a full-blown switch made by Cisco. For example, it can do extensive QoS classification, policing and marking, but only limited queuing (CB-WFQ based on 802.1p CoS and VMware-specific traffic types).
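The classification/policing/marking part uses the familiar MQC-style constructs. A rough sketch of what a policy applied to a VM-facing port profile might look like (class names, values and the exact police syntax are illustrative, not verified against a specific release):

```
! Hypothetical example: classify on DSCP, mark CoS, and police.
class-map type qos match-any Web-Traffic
  match dscp af21

policy-map type qos Mark-And-Police
  class Web-Traffic
    set cos 2
    police cir 100 mbps bc 200 ms conform transmit violate drop

! Attach the policy to the hypothetical vEthernet port profile
port-profile type vethernet WebServers
  service-policy type qos input Mark-And-Police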

Update 2011-06-09: CB-WFQ support was added in software release 4.2(1) SV1(4).
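With that release, queuing on the uplinks is configured with a separate queuing policy map, along these lines (again a hedged sketch with hypothetical names; check the release notes and QoS configuration guide for the classes and syntax actually supported):

```
! Hypothetical example: CB-WFQ on an uplink, matching on 802.1p CoS.
class-map type queuing match-any VM-Traffic
  match cos 2

policy-map type queuing Uplink-Queuing
  class type queuing VM-Traffic
    bandwidth percent 40

! Attach the queuing policy to a hypothetical uplink port profile
port-profile type ethernet DC-Uplink
  service-policy type queuing output Uplink-Queuing
```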

More information

Basic virtualization techniques are described in my Next-Generation IP Services webinar (buy a recording). The Data Center 3.0 for Networking Engineers webinar (buy a recording) has a whole section devoted to server virtualization and its impact on LAN and SAN. Last but definitely not least, the VMware Networking Deep Dive webinar (register for an online session) has an in-depth description of virtual layer-2 switching inside the VMware hypervisor, and the roles of Nexus 1000V, Adapter FEX (VN-Tag) and VM-FEX (VN-Link). You can get access to all three webinars as part of the yearly subscription.


  1. I've also written a short post regarding that 1000v that gets into talking about the VEM and VSM...
  2. Good post, as usual Ivan. I also liked tonhe's write-up - good one dude.
  3. Ivan, nice article. The latest release of the 1000v supports QoS weighted fair queueing...

  4. Hi Ivan,
    I was hoping you would correct your article above, when referencing the VM-FEX. It utilizes VN-Tag (802.1Qbh), not VN-Link, which is a marketing term that encompasses vNICs, FI, N2K, FEXes and N1K.
    VM-FEX and Adapter FEX (vNIC aka Palo card) all use 802.1Qbh.
  5. Where exactly do you think I made a mistake?
  6. VM-FEX is used in combination with the N5K, which does run NX-OS. In other words, VM-FEX does have an NX-OS-based control plane, doesn't it?
    1. No. VEM communicates with NX-OS, which means that VEM has an independent control plane, not the NX-OS resident in the adjacent Nexus switch.
Add comment