vSphere 5.0 new networking features: disappointing
I was sort of upset that my vacation was making me miss the VMware vSphere 5.0 launch event (on the other hand, being limited to half an hour of Internet access served with an early-morning cappuccino is not necessarily a bad thing), but once I got home, I realized I hadn’t really missed much. Let me rephrase that – VMware launched a major release of vSphere, and the networking features are barely worth mentioning (or maybe they’ll launch them when the vTax brouhaha subsides).
I had a really hard time finding anything networking-related in a very long list of new features and enhancements, and VMware’s very slim white paper tells you how serious they are about improving their networking support. Their community pages complete the picture – while other blogs have exploded with detailed descriptions of new vSphere 5.0 goodies, the last entry in the VMware Networking Blog is a month old and describes the Cisco Nexus 1000v. Anyhow, let’s look at the morsels we’ve got:
LLDP support ... years after everyone else had it, including Cisco in Nexus 1000v.
NetFlow support. This one might actually excite the people who care about inter-VM traffic flows. Everyone else will probably continue using NetFlow probes at the DC edge.
Port mirroring. Good one, particularly the ability to send the mirrored traffic to a VM on another host.
NETIOC enhancements. Now you can define your own traffic types that you can later use in queuing/shaping configuration. If my failing memory still serves me, we were able to configure ACL-based custom queuing in 1990.
802.1p tagging. Finally. 13 years after the standard was ratified.
And last but definitely not least, vShield Edge got static routing. A Linux-based VM that is positioned as an L3 appliance providing NAT, DHCP and a few other services now supports static routing. Why is that a new feature?
Interestingly, some VMware features that use the network transport got significantly better – HA was completely rewritten, vMotion supports multiple NICs and slows down hyper-active VMs, Intel’s software FCoE initiator is supported, and ESXi has a firewall protecting the management plane – but the lack of networking innovation is stunning. Where’s EVB, SR-IOV or hypervisor pass-through like VM-FEX, not to mention MAC-over-IP? How about something as trivial as link aggregation with LACP? Is everyone but me (and maybe two other bloggers) happy configuring a wide range of VLANs spanning all ESXi hosts in the data center, or is VMware simply not listening to networking engineers? It looks like some people still believe every server has a very important storage adapter and a cumbersome NIC appendix.
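To illustrate the LACP point (the switch platform, interface and channel-group numbers below are purely hypothetical): because the vSwitch supports only static link aggregation with IP-hash load balancing, the upstream switch ports facing an ESXi host have to be bundled unconditionally, with no negotiation protocol that would catch a miscabled or misconfigured port. On a Cisco IOS switch, the whole difference is a single keyword:

! Static port channel: the only option the vSphere 5.0 vSwitch can work with
interface GigabitEthernet1/0/1
 channel-group 1 mode on
!
! LACP-negotiated port channel: what we would like to use, but the vSwitch cannot negotiate LACP
interface GigabitEthernet1/0/2
 channel-group 2 mode active

The static variant blindly bundles whatever happens to be plugged in; LACP would at least detect a link connected to the wrong switch or host before traffic gets black-holed.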
More information
The details of VMware networking and its integration with the rest of the data center network are described in the VMware Networking Deep Dive webinar.
Could you please provide real-life examples where LACP support is an important feature for vSphere?
Ivan has already provided an example where one VM would generate more traffic than a single uplink has bandwidth, which is getting less important now with more and more companies moving to 10G.
Anyway, I would be glad to learn about real-life situations where LACP is really important.
Another reason we would really need LACP is described here:
http://blog.ioshints.info/2011/01/vswitch-in-multi-chassis-link.html