Category: overlay networks
Are Overlay Networking Tunnels a Scalability Nightmare?
Every time I mention overlay virtual networking tunnels, someone starts worrying about the scalability of the approach, asking questions along the lines of “In a data center with hundreds of hosts, do I have an impossibly high number of GRE tunnels in the full mesh? Are there scaling limitations to this approach?”
Not surprisingly, some ToR switch vendors abuse this fear to the point where they look downright stupid (but I guess that’s their privilege), so let’s set the record straight.
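To put a number on that fear: a full mesh between n hosts needs one tunnel per host pair. Here’s a quick back-of-the-envelope sketch (my own illustration, not taken from the post):

```python
# Full mesh of point-to-point tunnels between n hypervisor hosts:
# one tunnel per host pair, i.e. n * (n - 1) / 2 tunnels in total.
def full_mesh_tunnels(hosts: int) -> int:
    return hosts * (hosts - 1) // 2

for n in (100, 500, 1000):
    print(f"{n} hosts -> {full_mesh_tunnels(n)} tunnels")
# 100 hosts -> 4950 tunnels
# 500 hosts -> 124750 tunnels
# 1000 hosts -> 499500 tunnels
```

The raw number does look scary; whether it matters depends on how much per-tunnel state anyone actually has to keep, which is exactly what the statefulness discussion below is about.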
Routing Protocols on NSX Edge Services Router
VMware gave me early access to the NSX hands-on lab a few days prior to VMworld 2013. The lab was meant to demonstrate the basics of NSX, from VXLAN encapsulation to cross-subnet flooding, but I quickly veered off the beaten path and started playing with routing protocols in the NSX Edge appliances.
What is VMware NSX?
Answer #1: An overlay virtual networking solution providing logical bridging (aka layer-2 forwarding or switching), logical routing (aka layer-3 switching), distributed or centralized firewalls, load balancers, NAT and VPNs.
Answer #2: A merger of Nicira NVP and VMware vCNS (a product formerly known as vShield).
Oh, and did I mention it’s actually two products, not one?
50 Shades of Statefulness
A while ago Greg Ferro wrote a great article describing the integration of overlay and physical networks, claiming that “an overlay network tunnel has no state in the physical network”, which triggered an almost-immediate reaction from Marten Terpstra (of RIPE fame, now @ Plexxi), who argued that the network (at least the first ToR switch) knows the MAC and IP address of the hypervisor host and thus keeps at least some state associated with the tunnel.
Marten is correct from a purely scholastic perspective (using his argument, the network keeps some state about TCP sessions as well), but what really matters is how much state is kept, which device keeps it, how it’s created and how often it changes.
Networking Enhancements in Windows Server 2012 R2
The “What’s Coming in Hyper-V Network Virtualization (Windows Server 2012 R2)” blog post got way too long, so I had to split it into two parts: Hyper-V Network Virtualization and the rest of the features (this post).
Nicira NVP Control Plane
In the previous posts I described how a typical overlay virtual networking data plane works and what technologies vendors use to implement the associated control plane that maps VM MAC addresses to transport IP addresses. Now let’s walk through the details of a particular implementation: Nicira Network Virtualization Platform (NVP), part of VMware NSX.
Control Plane Protocols in Overlay Virtual Networks
Multiple overlay network encapsulations are nothing more than a major inconvenience (and the religious wars fought over individual bit fields are close to meaningless) for anyone trying to support more than one overlay virtual networking technology (just ask F5 or Arista).
The key differentiator between scalable and not-so-very-scalable architectures and technologies is the control plane – the mechanism that maps (at the very minimum) a remote VM MAC address to the transport network IP address of the target hypervisor (see A Day in a Life of an Overlaid Virtual Packet for more details).
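In its simplest form that mapping is just a lookup table. Here’s a minimal sketch, assuming nothing about any particular vendor’s implementation (class and method names are made up):

```python
# Minimal sketch of the core control-plane state:
# remote VM MAC address -> transport (underlay) IP of the hosting hypervisor.
from typing import Dict, Optional

class MacToVtepTable:
    def __init__(self) -> None:
        self._table: Dict[str, str] = {}

    def learn(self, vm_mac: str, transport_ip: str) -> None:
        # Populated by a central controller, dynamic learning, or flooding,
        # depending on the implementation.
        self._table[vm_mac] = transport_ip

    def lookup(self, vm_mac: str) -> Optional[str]:
        # The sending hypervisor asks: toward which transport IP address
        # do I encapsulate a frame destined for this VM MAC?
        return self._table.get(vm_mac)

table = MacToVtepTable()
table.learn("00:50:56:aa:bb:cc", "10.0.1.17")
print(table.lookup("00:50:56:aa:bb:cc"))  # -> 10.0.1.17
```

How those entries get created and refreshed (controller push, dynamic learning, flooding) is precisely where the scalable and not-so-scalable solutions part ways.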
What’s Coming in Hyper-V Network Virtualization (Windows Server 2012 R2)
Right after Microsoft’s TechEd event, CJ Williams kindly sent me links to videos describing the new features in the upcoming Windows Server (and Hyper-V) release. I would strongly recommend you watch What’s New in Windows Server 2012 R2 Networking and Deep Dive on Hyper-V Network Virtualization in Windows Server 2012 R2, and here’s a short(er) summary.
This blog post describes futures that will ship in 2H2013. However, as all the videos mentioned above included live demos, and the preview release shipped on June 24th, it’s obvious they’re past the “it works so great in PowerPoint” stage.
A Day in a Life of an Overlaid Virtual Packet
I explain the intricacies of overlay network forwarding in every overlay-network-related webinar (Cloud Computing Networking, VXLAN deep dive...), but have never written a blog post about them. Let’s fix that.
First of all, remember that most mainstream overlay network implementations (Cisco Nexus 1000V, VMware vShield, Microsoft Hyper-V) don’t change the intra-hypervisor network behavior: a virtual machine network interface card (VM NIC) is still connected to a layer-2 hypervisor switch. The magic happens between the internal layer-2 switch and the physical (server) NIC.
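If it helps, here is that decision point in illustrative (pseudo)code; all names are mine, and the encapsulation details are deliberately left out:

```python
# Illustrative sketch of the forwarding decision taken between the
# hypervisor's internal layer-2 switch and the physical (server) NIC.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str            # "local", "encapsulate", or "flood"
    remote_vtep: str = ""  # transport IP to encapsulate toward, if remote

def forward(dst_mac: str, local_vms: set, mac_to_vtep: dict) -> Decision:
    if dst_mac in local_vms:
        # Destination VM sits behind the same internal L2 switch:
        # plain intra-hypervisor delivery, no encapsulation involved.
        return Decision("local")
    if dst_mac in mac_to_vtep:
        # Remote VM: wrap the Ethernet frame (VXLAN, GRE, ...) into an IP
        # packet addressed to the destination hypervisor's transport IP
        # and send it out the physical NIC.
        return Decision("encapsulate", mac_to_vtep[dst_mac])
    # Unknown destination: flood or query the control plane,
    # depending on the implementation.
    return Decision("flood")

print(forward("00:50:56:01:02:03",
              local_vms={"00:50:56:aa:aa:aa"},
              mac_to_vtep={"00:50:56:01:02:03": "10.0.2.31"}))
# -> Decision(action='encapsulate', remote_vtep='10.0.2.31')
```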
Cloud-as-an-Appliance Design
The original idea behind the cloud-as-an-appliance design came from Brad Hedlund’s blog post in which he described how he’d build a greenfield Hadoop or private cloud cluster with servers connected to a Clos fabric. Throw virtual appliances into the mix and you get an extremely simple and versatile architecture.
Where’s the Revolutionary Networking Innovation?
In his recent blog post, Joe Onisick wrote: “What network virtualization doesn’t provide, in any form, is a change to the model we use to deploy networks and support applications. [...] All of the same broken or misused methodologies are carried forward. [...] Faithful replication of today’s networking challenges as virtual machines with encapsulation tunnels doesn’t move the bar for deploying applications.”
Unicast-Only VXLAN Finally Shipping
The long-promised unicast-only VXLAN has finally shipped with the Nexus 1000V release 4.2(1)SV2(2.1) (there must be some logic behind those numbers, but they all look like madness to me). The new Nexus 1000V release brings two significant VXLAN enhancements: unicast-only mode and MAC distribution mode.
Smart Fabrics Versus Overlay Virtual Networks
With the recent plethora of overlay networking startups and the Cisco Live Dynamic Fabric Architecture announcements, it’s time to revisit a blog post I wrote a bit more than a year ago comparing virtual networks and voice technologies.
They say a picture is worth a thousand words – here are a few slides from my Interop 2013 Overlay Virtual Networking Explained presentation.
Implementing Control-Plane Protocols with OpenFlow
The true OpenFlow zealots would love you to believe that you can drop whatever you’ve been doing before and replace it with a clean-slate solution using the dumbest (and cheapest) possible switches and OpenFlow controllers.
In the real world, your shiny new network has to communicate with the outside world … or you could take the approach most controller vendors took: pretend STP is irrelevant and ask people to configure static LAGs because you don’t support LACP either.
Network Virtualization and Spaghetti Wall
I was reading What Network Virtualization Isn’t by Joe Onisick the other day and started experiencing all sorts of unpleasant flashbacks caused by my overly long exposure to networking industry missteps and dead ends touted as the best possible solutions or architectures in the days of their glory.