Category: virtualization
NVGRE – because one standard just wouldn’t be enough
Two weeks after VXLAN (backed by VMware, Cisco, Citrix and Red Hat) was launched at VMworld, Microsoft, Intel, HP and Dell published the NVGRE draft (Arista and Broadcom are cleverly keeping a foot in both camps), which solves the same problem in a slightly different way.
If you’re still wondering why we need VXLAN and NVGRE, read my VXLAN post (and the one describing how VXLAN, OTV and LISP fit together).
It’s obvious the NVGRE draft was a rushed affair; its only significant and original contribution to knowledge is the idea of using the lower 24 bits of the GRE key field to carry the Tenant Network Identifier (then again, lesser ideas have been patented time and again). As with VXLAN, most of the real problems are handwaved away or deferred to other (or future) drafts.
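If you’re wondering what that one idea looks like on the wire, here’s a minimal Python sketch that follows the description above (24-bit tenant ID in the low-order bits of the 32-bit GRE key, upper byte left untouched). The GRE constants are real protocol values; the helper functions and the exact bit layout are my illustration, not lifted from the draft.

```python
import struct

GRE_FLAG_KEY  = 0x2000   # "Key present" bit in the GRE flags/version word
GRE_PROTO_TEB = 0x6558   # Transparent Ethernet Bridging (Ethernet payload in GRE)

def build_gre_header(tenant_id: int) -> bytes:
    """Build an 8-byte GRE header carrying a 24-bit tenant ID in the key field."""
    if not 0 <= tenant_id < 2**24:
        raise ValueError("tenant ID must fit in 24 bits")
    key = tenant_id & 0xFFFFFF          # low-order 24 bits carry the tenant ID
    return struct.pack("!HHI", GRE_FLAG_KEY, GRE_PROTO_TEB, key)

def tenant_id_from_header(header: bytes) -> int:
    """Extract the 24-bit tenant ID back out of a received GRE header."""
    _flags, _proto, key = struct.unpack("!HHI", header[:8])
    return key & 0xFFFFFF
```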
Nexus 1000V LACP offload and the dangers of in-band control
A while ago someone sent me the following comment as part of a lengthy discussion focusing on Nexus 1000V: “My SE tells me that the latest 1000V release has rewritten the LACP code so that it operates entirely within the VEM. VSM will be out of the picture for LACP negotiations. I guess there have been problems.”
If you’re not convinced you should be running LACP between the ESX hosts and the physical switches, read this one (and this one). Ready? Let’s go.
VXLAN, OTV and LISP
Immediately after VXLAN was announced @ VMworld, the twittersphere erupted in speculation and questions, many of them focusing on how VXLAN relates to OTV and LISP, and why we might need a new encapsulation method.
VXLAN, OTV and LISP are point solutions targeting different markets. VXLAN is an IaaS infrastructure solution, OTV is an enterprise L2 DCI solution and LISP is ... whatever you want it to be.
VXLAN: MAC-over-IP-based vCloud networking
In one of my vCloud Director Networking Infrastructure rants I wrote “if they had decided to use IP encapsulation, I would have applauded.” It’s time to applaud: Cisco has just demonstrated Nexus 1000V supporting MAC-over-IP encapsulation for vCloud Director isolated networks at VMworld, solving at least some of the scalability problems MAC-in-MAC encapsulation has.
Once the new release becomes available, the Nexus 1000V VEM will be able to encapsulate MAC frames generated by virtual machines residing in isolated segments into UDP packets exchanged between the VEMs.
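The encapsulation itself is conceptually trivial: take the VM-generated Ethernet frame, prepend a small shim header carrying a 24-bit segment ID, and ship it in a UDP packet between VEM (VTEP) IP addresses. Here’s a minimal sketch of that framing; the shim layout follows the VXLAN draft, while the UDP port value and helper names are my own illustration.

```python
import socket
import struct

VXLAN_FLAG_VNI = 0x08   # "VNI present" flag in the 8-byte VXLAN shim header
VXLAN_UDP_PORT = 8472   # illustrative; the draft left the actual port to IANA assignment

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN-style shim (flags + 24-bit segment ID) to a MAC frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("segment ID must fit in 24 bits")
    # 8 bytes: flags, 3 reserved bytes, 24-bit VNI, 1 reserved byte
    shim = struct.pack("!B3xI", VXLAN_FLAG_VNI, vni << 8)
    return shim + inner_frame

def send_to_remote_vem(inner_frame: bytes, vni: int, remote_vtep_ip: str) -> None:
    """Ship the encapsulated frame to the remote VEM as a plain UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(encapsulate(inner_frame, vni), (remote_vtep_ip, VXLAN_UDP_PORT))
```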
Soft Switching Might not Scale, but We Need It
Following a series of soft switching articles written by Nicira engineers (hint: they’re taking an approach similar to that of Juniper’s QFabric marketing team), Greg Ferro wrote a scathing Soft Switching Fails at Scale reply.
While I agree with many of his arguments, the sad truth is that with the current state of server infrastructure virtualization we need soft switching regardless of the hardware vendors’ claims about the benefits of 802.1Qbg (EVB/VEPA), 802.1Qbh (port extenders) or VM-FEX.
VM-FEX – not as convoluted as it looks
Reading Cisco’s marketing materials, VM-FEX (the feature probably known as VN-Link before someone went on a FEX-branding spree) seems like a fantastic idea: VMs running in an ESX host are connected directly to virtual NICs presented by the Palo adapter and then, through point-to-point virtual links, to the upstream switch, where you can deploy all sorts of features the virtual switch embedded in the ESX host still cannot offer. As you might imagine, the reality behind the scenes is more complex.
High Availability Fallacies
I’ve already written about the stupidities of risking the stability of two data centers to enable live migration of “mission critical” VMs between them. Now let’s take the discussion a step further: after hearing how critical the VM the server or application team wants to migrate is, you might be tempted to ask “and how do you ensure its high availability the rest of the time?” The response will likely be along the lines of “We’re using VMware High Availability” or, even prouder, “We’re using VMware Fault Tolerance to ensure even a hardware failure can’t bring it down.”
VLANs used by Nexus 1000V
Chris sent me an interesting question:
Imagine L2 traffic between two VMs on different ESX hosts, both using the Nexus 1000V. Will the physical switches see traffic with source and destination MACs matching the VMs’ vNICs, or traffic on the Nexus 1000V “packet” VLAN between the VEMs (in which case the packet VLAN would act as a virtual backplane)?
Imagine the Ruckus When the Hypervisor Vendors Wake Up
It seems that most networking vendors consider Flat Earth architectures the new bonanza. Everyone is running to join the gold rush, from Cisco’s FabricPath and Brocade’s VCS to HP’s IRF and Juniper’s upcoming QFabric. As always, the standardization bodies are following the industry with a large buffet of standards to choose from: TRILL, 802.1aq (SPB), 802.1Qbg (EVB) and 802.1Qbh (port extenders).
Building a Greenfield Data Center
The following design challenge landed in my Inbox not too long ago:
My organization is in the process of building a completely new data center from the ground up (new hardware, software, protocols ...). We will start with one site but may move to two for DR purposes. What DC technologies should we be looking at implementing to build a stable infrastructure that will scale and support the technologies you feel will play a big role in the future?
In an ideal world, my answer would begin with “Start with the applications.”
vSphere 5.0 new networking features: disappointing
I was somewhat upset that my vacation was making me miss the VMware vSphere 5.0 launch event (on the other hand, being limited to half an hour of Internet access served with early-morning cappuccino is not necessarily a bad thing), but after I managed to get home, I realized I hadn’t really missed much. Let me rephrase that – VMware launched a major release of vSphere, and the networking features are barely worth mentioning (or maybe they’ll launch them once the vTax brouhaha subsides).
Hypervisors use promiscuous NIC mode – does it matter?
Chris Marget sent me the following interesting observation:
One of the things we learned back at the beginning of Ethernet is no longer true: hardware filtering of incoming Ethernet frames by the NICs in Ethernet hosts is gone. VMware runs its NICs in promiscuous mode. The fact that this Networking 101 level detail is no longer true kind of blows my mind.
So what exactly is going on and does it matter?
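Conceptually, the destination-MAC filtering hasn’t disappeared; it has simply moved from NIC hardware into the soft switch, which compares the destination MAC of every frame arriving on the promiscuous uplink with the vNIC MAC addresses it knows about. Here’s a toy model of that lookup (the names and structure are mine, not VMware’s, and multicast handling is heavily simplified):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class SoftSwitch:
    """Toy model of software MAC filtering behind a promiscuous uplink NIC."""

    def __init__(self):
        self.vnic_table = {}   # destination MAC -> local VM port

    def register_vnic(self, mac: str, vm_port: str) -> None:
        self.vnic_table[mac.lower()] = vm_port

    def ports_for_uplink_frame(self, dst_mac: str) -> list[str]:
        """Which local VM port(s) should get a frame received on the uplink?"""
        dst = dst_mac.lower()
        if dst == BROADCAST or int(dst.split(":")[0], 16) & 1:
            # broadcast/multicast: deliver to local vNICs (real switches track interest)
            return list(self.vnic_table.values())
        port = self.vnic_table.get(dst)
        return [port] if port else []   # unknown unicast from the uplink is dropped, not flooded

sw = SoftSwitch()
sw.register_vnic("00:50:56:aa:bb:01", "vm1-eth0")
print(sw.ports_for_uplink_frame("00:50:56:aa:bb:01"))   # ['vm1-eth0']
print(sw.ports_for_uplink_frame("00:50:56:aa:bb:99"))   # [] -- filtered in software, not in the NIC
```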
vCider: climbing the virtual networking mountain
You probably know the old saying: if the mountain won’t come to you, you have to go out there and climb it. vCider, a brand-new startup launching its product at the Gigaom Structure Launchpad, decided to do something similar in the server virtualization (Infrastructure-as-a-Service; IaaS) space. Its software allows IaaS customers to build their own virtual layer-2 networks (let’s call them vSubnets) on top of the IaaS provider’s IP infrastructure; you can even build a vSubnet between VMs running within your enterprise network (a private cloud in cloudy lingo) and those running within Amazon EC2 or Rackspace.
Full disclosure: Chris Marino from vCider got in touch with me in early June. I found the idea interesting, he helped me understand their product (even offered a test run, but I chose to trust the technical information available on their web site and passed to me in e-mails and phone calls), and I decided to write about it. That’s it.
Automatic edge VLAN provisioning with VM Tracer from Arista
One of the implications of Virtual Machine (VM) mobility (as implemented by VMware’s vMotion or Microsoft’s Live Migration) is the need to have the same VLAN configured on the access ports connected to the source and target hypervisor hosts. EVB (802.1Qbg) provides a perfect solution, but it’s questionable when (if ever) it will leave dreamland. In the meantime, most environments have to deploy stretched VLANs ... or you might be able to use the hypervisor-aware features of your edge switches, for example VM Tracer implemented in Arista EOS.
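The underlying idea is easy to sketch: figure out (through the vSphere API) which VLANs the VMs on each attached hypervisor host actually use, and prune the host-facing trunk to exactly that set. The toy model below uses in-memory dictionaries instead of real vCenter or EOS calls; it’s purely illustrative and not how VM Tracer is actually implemented.

```python
# Toy model of hypervisor-aware edge VLAN provisioning (the VM Tracer idea).
# vm_inventory maps hypervisor host -> VLANs used by its VMs' port groups;
# port_to_host maps switch port -> attached hypervisor host. Both are stand-ins
# for data a real implementation would pull from vCenter and the switch.

def vlans_needed_on_host(vm_inventory: dict[str, list[int]], host: str) -> set[int]:
    """VLANs used by the VMs currently running on a hypervisor host."""
    return set(vm_inventory.get(host, []))

def sync_edge_vlans(vm_inventory: dict[str, list[int]],
                    port_to_host: dict[str, str]) -> dict[str, set[int]]:
    """Compute the allowed-VLAN list for every host-facing trunk port."""
    return {port: vlans_needed_on_host(vm_inventory, host)
            for port, host in port_to_host.items()}

# After a vMotion moves a VM (and its VLAN) to another host, rerunning the sync
# adds the VLAN to the new host's port and prunes it from the old one.
inventory = {"esx1": [101, 102], "esx2": [102]}
ports = {"Ethernet1": "esx1", "Ethernet2": "esx2"}
print(sync_edge_vlans(inventory, ports))   # {'Ethernet1': {101, 102}, 'Ethernet2': {102}}
```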
Test your VMware networking skills
Two vSwitch portgroup-related questions:
- Can you configure the same VLAN on two portgroups in the same vSwitch? How about vDS?
- Can VMs attached to two different portgroups in the same ESX host talk to each other directly, or do they have to communicate through an external switch (or L3 device)?
Got your answers? Now click the Read more ... link.