Category: virtualization
Hot and Cold VM Mobility
Another day, another interesting Expert Express engagement, another stretched layer-2 design solving the usual requirement: “We need inter-DC VM mobility.”
The usual question (“And why would you want to vMotion a VM between data centers?”) got a refreshing answer: “Oh, no, that would not work for us.”
NEC Launched a Virtual OpenFlow Switch – Does It Matter?
On January 22nd NEC launched another component of their ProgrammableFlow architecture: a virtual switch for the Hyper-V 3.0 environment. The obvious questions to ask are: (a) why do we care, and (b) how is that different from Nicira or BigSwitch.
TL;DR summary: It depends.
Hyper-V Network Virtualization (HNV/NVGRE): Simply Amazing
In August 2011, when the NVGRE draft appeared mere days after VXLAN was launched, I dismissed it as “more of the same, different encapsulation, vague control plane”. Boy, was I wrong … and pleasantly surprised when I figured out that one of the major virtualization vendors actually did the right thing.
TL;DR Summary: Hyper-V Network Virtualization is a layer-3 virtual networking solution with a centralized (orchestration-system-based) control plane. Its scaling properties are thus way better than VXLAN’s (or Nicira’s … unless they implemented L3 forwarding since the last time we spoke).
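To illustrate the scaling argument, here’s a minimal Python sketch (all names, IDs and addresses are invented, nothing Microsoft- or VMware-specific) contrasting a lookup table populated by the orchestration system with data-plane flood-and-learn:

```python
# HNV-style model: the orchestration system pushes complete lookup records to
# every host, mapping (virtual subnet, customer IP) -> provider (underlay) IP.
lookup_table = {
    (5001, "10.0.1.4"): "192.168.100.11",   # tenant VM on host 11
    (5001, "10.0.1.5"): "192.168.100.12",   # tenant VM on host 12
}

def forward_hnv(vsid, dst_customer_ip):
    """L3 lookup; an unknown destination is dropped, never flooded."""
    return lookup_table.get((vsid, dst_customer_ip))

# VXLAN-style flood-and-learn model: an unknown destination is flooded to the
# segment's IP multicast group, and every VTEP learns mappings from the floods.
flood_groups = {5001: "239.1.1.1"}
learned_macs = {}

def forward_vxlan(vni, dst_mac):
    """L2 lookup; a miss means flooding to every VTEP in the segment."""
    return learned_macs.get((vni, dst_mac), flood_groups[vni])
```

The more hosts and segments you have, the more the second model hurts: every lookup miss turns into traffic hitting every VTEP in the segment, while the first model never floods at all.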
VXLAN Gateways
Mark Berly, the guest star of my VXLAN Technical Deep Dive webinar, focused on VXLAN gateways. Here’s the first part of his presentation, explaining what VXLAN gateways are and where you’d need them.
VXLAN Is Not a Data Center Interconnect Technology
In a comment to the Firewalls in a Small Private Cloud blog post I wrote “VXLAN is NOT a viable inter-DC solution,” and Jason wasn’t exactly happy with my blanket response. I hope Jason got a detailed answer in the VXLAN Technical Deep Dive webinar; here’s a somewhat shorter explanation.
What Exactly Are Virtual Firewalls?
Kaage added a great comment to my Virtual Firewall Taxonomy post:
And many of physical firewalls can be virtualized. One physical firewall can have multiple virtual firewalls inside. They all have their own routing table, rule base and management interface.
He’s absolutely right, but there’s a huge difference between security contexts (to use the ASA terminology) and firewalls running in VMs.
Virtual Firewall Taxonomy
Based on readers’ comments and recent discussions with fellow packet pushers, it seems the marketing departments and industry press have managed to thoroughly muddy the virtualized security waters. Trying to fix that, here’s my attempt at a virtual firewall taxonomy.
Firewalls in a Small Private Cloud
Mrs. Y, the network security princess, sent me an interesting design challenge:
We’re building a private cloud and I'm pushing for keeping east/west traffic inside the cloud. What are your opinions on the pros/cons of keeping east/west traffic in the cloud vs. letting it exit for security/routing?
Short answer: it depends.
VM-level IP Multicast over VXLAN
Dumlu Timuralp (@dumlutimuralp) sent me an excellent question:
I always get confused when thinking about IP multicast traffic over VXLAN tunnels. Since VXLAN already uses a Multicast Group for layer-2 flooding, I guess all VTEPs would have to receive the multicast traffic from a VM, as it appears as L2 multicast. Am I missing something?
Short answer: no, you’re absolutely right. IP multicast over VXLAN is clearly suboptimal.
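If it helps, here’s a rough Python sketch (hypothetical names, not real VTEP code) of the forwarding decision that makes this so: to the VTEP, a VM’s IP multicast frame is just another BUM frame, so it lands in the segment’s flood group and reaches every VTEP that joined the group, interested receivers or not:

```python
FLOOD_GROUP = {5001: "239.1.1.1"}   # VNI -> underlay IP multicast group

def vtep_outer_destination(vni, dst_mac, unicast_fib):
    """Return the outer (underlay) destination IP for an inner Ethernet frame."""
    is_bum = dst_mac.lower().startswith("01:00:5e") or dst_mac == "ff:ff:ff:ff:ff:ff"
    if is_bum or dst_mac not in unicast_fib:
        # Broadcast, unknown unicast and multicast all go to the VNI flood group.
        return FLOOD_GROUP[vni]
    return unicast_fib[dst_mac]

# A VM sending to 239.5.5.5 uses destination MAC 01:00:5e:05:05:05, so the frame
# is flooded to all VTEPs in VNI 5001 even if only one host has a receiver.
print(vtep_outer_destination(5001, "01:00:5e:05:05:05",
                             {"aa:bb:cc:dd:ee:01": "192.168.100.12"}))
```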
Arista launches the first hardware VXLAN termination device
Arista is launching a new product line today shrouded in a mist of SDN and cloud buzzwords: the 7150 series top-of-rack switches. As expected, the switches offer up to 64 10GE ports with wire-speed L2 and L3 forwarding and 400-nanosecond(!) latency.
Also expected from Arista: unexpected creativity. Instead of providing a 40GE port on the switch that can be split into four 10GE ports with a breakout cable (like everyone else is doing), these switches group four physical 10GE SFP+ ports into a native 40GE (not 4x10GE LAG) interface.
But wait, there’s more...
Dear VMware, BPDU Filter != BPDU Guard
A while ago I described the need for BPDU guard in hypervisor switches, and not surprisingly got a number of “it’s there” tweets seconds after vSphere 5.1 (which includes BPDU filter) was launched. Rickard Nobel also did a magnificent job of replicating the problem my blog post described and verifying that vSphere 5.1 stops a BPDU denial-of-service attack.
Unfortunately, BPDU filter is not the same feature as BPDU guard. Here’s why.
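To make the difference concrete, here’s a toy Python model (not vSphere or switch code, just invented structures) of the two behaviors: filter silently eats BPDUs while the port stays up, guard shuts the port down on the first BPDU it sees:

```python
def bpdu_filter(port, frame):
    """Silently drop BPDUs; the (possibly misbehaving) port stays up."""
    if frame.get("type") == "BPDU":
        return False            # frame dropped, nothing else happens
    return port["enabled"]      # everything else keeps flowing

def bpdu_guard(port, frame):
    """Shut the port down the moment a BPDU shows up."""
    if frame.get("type") == "BPDU":
        port["enabled"] = False  # err-disable: the offending VM is cut off
        return False
    return port["enabled"]

port = {"enabled": True}
bpdu_filter(port, {"type": "BPDU"})   # port still up; the source keeps talking
bpdu_guard(port, {"type": "BPDU"})    # port disabled; problem contained and visible
```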
Midokura’s MidoNet: a Layer 2-4 virtual network solution
Almost everyone agrees the current way of implementing virtual networks with dumb hypervisor switches and top-of-rack kludges (including Edge Virtual Bridging – EVB or 802.1Qbg – and 802.1BR) doesn’t scale. Most people working in the field (with the notable exception of some hardware vendors busy protecting their turfs in the NVO3 IETF working group) also agree virtual networks running as applications on top of IP fabric are the only reasonable way to go ... but that’s all they currently agree upon.
VXLAN and OTV: I’ve been suckered
When VXLAN came out a year ago, a lot of us looked at the packet format and wondered why Cisco and VMware decided to use UDP instead of the more commonly used GRE. One explanation was evident: UDP port numbers give you more entropy that you can use in 5-tuple-based load balancing. The other explanation looked even more promising: VXLAN and OTV use a very similar packet format, so the hardware already doing OTV encapsulation (Nexus 7000) could be used to do VXLAN termination. Boy, have we been suckered.
Update 2015-07-12: NX-OS 7.2.0 supports OTV encapsulation with VXLAN-like headers on F3 linecards. See OTV UDP Encapsulation for more details (HT: Nik Geyer).
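To illustrate the entropy argument: a VXLAN VTEP typically derives the outer UDP source port from a hash of the inner frame, so every inner flow gets its own outer 5-tuple. Here’s a simplified Python sketch (made-up hashing and field layout, not the exact algorithm any particular implementation uses):

```python
import zlib

VXLAN_DST_PORT = 4789

def outer_udp_source_port(inner_headers: bytes) -> int:
    """Map an inner-flow hash into the dynamic port range (49152-65535)."""
    return 49152 + (zlib.crc32(inner_headers) % 16384)

flow_a = b"aa:bb:cc:01|aa:bb:cc:02|10.0.1.4|10.0.1.5|tcp/80"
flow_b = b"aa:bb:cc:03|aa:bb:cc:04|10.0.1.6|10.0.1.7|tcp/443"
print(outer_udp_source_port(flow_a), outer_udp_source_port(flow_b))
# With GRE there's no outer L4 header, so a 5-tuple hash sees just one flow
# between any two tunnel endpoints and all the traffic lands on one link.
```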
802.1BR – same old, same old
A while ago, a tweet praising the wonders of 802.1BR piqued my curiosity. I couldn’t resist downloading the latest draft and spending a few hours trying to decipher IEEE language (as far as IEEE drafts go, 802.1BR is highly readable) ... and it was déjà vu all over again.
Short summary: 802.1BR is a repackaged and enhanced 802.1Qbh (or the standardized version of VM-FEX). There’s nothing fundamentally new to get excited about.
OpenStack/Quantum SDN-based virtual networks with Floodlight
A few years before MPLS/VPN was invented, I worked with a service provider who wanted to offer an L3-based (peer-to-peer) VPN service to their clients. With only a single forwarding table in the PE-routers, they had to be very creative and used ACLs to provide customer isolation (you’ll find more details in the Shared-router Approach to Peer-to-peer VPN Model section of my MPLS/VPN Architectures book).
Now, what does that have to do with OpenFlow, SDN, Floodlight and Quantum?
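Before we get there, here’s a back-of-the-napkin Python model (invented prefixes and interface names, no real router code) of the ACL trick described above: one shared forwarding table, with per-customer inbound ACLs providing the isolation that per-VRF tables later made trivial:

```python
from ipaddress import ip_address, ip_network

shared_fib = {                       # single forwarding table in the PE-router
    ip_network("10.1.0.0/16"): "to-customer-A",
    ip_network("10.2.0.0/16"): "to-customer-B",
}

customer_acl = {                     # inbound ACL on each customer-facing interface
    "from-customer-A": [ip_network("10.1.0.0/16")],
    "from-customer-B": [ip_network("10.2.0.0/16")],
}

def route(in_interface, dst):
    """Forward the packet only if the destination belongs to the same customer."""
    addr = ip_address(dst)
    if not any(addr in net for net in customer_acl[in_interface]):
        return None                  # ACL drop: destination owned by another customer
    for prefix, out_interface in shared_fib.items():
        if addr in prefix:
            return out_interface
    return None

print(route("from-customer-A", "10.1.3.7"))   # forwarded
print(route("from-customer-A", "10.2.3.7"))   # dropped by the ACL
```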