Category: virtualization
VMware NSX: Defining the Problem
Every good data center presentation starts by redefining The Problem, and my VMware NSX Architecture webinar was no exception – the first section describes Infrastructure-as-a-Service Networking Requirements.
I sprinted through this section during the live session; the video with a longer (and more detailed) explanation comes from the Overlay Virtual Networking webinar.
VMware NSX Architecture Videos Published
The edited videos from VMware NSX Architecture webinar are published on my demo content web site and on YouTube. Enjoy!
Overlay Virtual Networks 101
My keynote speech @ PLNOG11 conference was focused on (surprise, surprise) overlay virtual networks and described the usual motley crew: The Annoying Problem, The Hated VLAN, The Overlay Unicorn, The Control-Plane Wisdom and The Ever-Skeptic Use Case. You can view the presentation on my web site; PLNOG organizers promised video recording in mid-October.
Just in case you’re wondering why I keep coming back to PLNOG: they’re not only as good as ever; they’re getting even more creative.
Why Is Network Virtualization So Hard?
For the last few years we’ve been hearing how networking is the last bastion of rigidity in the wonderful unicorn-flavored virtual world. Let’s see why it’s so much harder to virtualize networks than compute or storage capacity (side note: it didn’t help that virtualization vendors had no clue about networking, but things are changing).
Enterasys Host Routing – Optimal L3 Forwarding with VM Mobility
I spent the last few weeks blogging about the brave new overlay worlds. Time to return to VLAN-based physical reality and revisit one of the challenges of VM mobility: mobile IP addresses.
A while ago I speculated that you might solve inter-subnet VM mobility with Mobile ARP. While Mobile ARP isn’t the best idea ever invented, it just might work reasonably well in environments with dozens (not millions) of virtual servers.
Enterasys decided to go down that route and implement host routing in their data center switches. For more details, watch the video from the Enterasys DCI webinar.
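The underlying mechanism is plain longest-prefix matching: the switch adjacent to a moved VM injects a /32 host route, which always beats the covering subnet route. Here’s a minimal Python sketch of that lookup (addresses and switch names are made up for illustration – this is obviously not how Enterasys implements it in hardware):

```python
import ipaddress

# Host routing for VM mobility, reduced to longest-prefix matching:
# the moved VM gets a /32 host route pointing at its new switch.
routes = {
    ipaddress.ip_network("10.1.1.0/24"): "switch-A",   # the subnet's home switch
    ipaddress.ip_network("10.1.1.42/32"): "switch-B",  # a VM that moved away
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes.items() if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.1.1.42"))  # switch-B: traffic follows the moved VM
print(next_hop("10.1.1.7"))   # switch-A: rest of the subnet is unaffected
```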
Virtual Appliance Routing – Network Engineer’s Survival Guide
Routing protocols running on virtual appliances significantly increase the flexibility of virtual-to-physical network integration – you can easily move the whole application stack across subnets or data centers without changing the physical network configuration.
Major hypervisor vendors already support the concept: VMware NSX-T edge nodes can run BGP or OSPF, and Hyper-V gateways can run BGP. Like it or not, we’ll have to accept these solutions in the near future – here’s a quick survival guide.
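To make the core idea concrete (a minimal sketch with made-up names and addresses, not an NSX-T or Hyper-V API): the appliance advertises its application subnet via BGP, so when the application stack moves, the fabric simply learns a new next hop for the same prefix.

```python
from dataclasses import dataclass

@dataclass
class BgpAdvertisement:
    prefix: str    # application subnet behind the virtual appliance
    next_hop: str  # appliance uplink address in the physical network

class FabricRib:
    """Tiny stand-in for the physical fabric's BGP table."""
    def __init__(self):
        self.routes = {}

    def receive(self, adv: BgpAdvertisement) -> None:
        # Best-path selection reduced to "latest advertisement wins"
        self.routes[adv.prefix] = adv.next_hop

rib = FabricRib()
rib.receive(BgpAdvertisement("10.1.1.0/24", "192.0.2.11"))  # stack in rack A
rib.receive(BgpAdvertisement("10.1.1.0/24", "192.0.2.99"))  # stack moved to rack B
print(rib.routes["10.1.1.0/24"])  # 192.0.2.99 -- no physical reconfiguration
```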
Are Overlay Networking Tunnels a Scalability Nightmare?
Every time I mention overlay virtual networking tunnels, someone starts worrying about the scalability of the approach along the lines of: “In a data center with hundreds of hosts, won’t I end up with an impossibly high number of GRE tunnels in a full mesh? Are there scaling limitations to this approach?”
Not surprisingly, some ToR switch vendors abuse this fear to the point where they look downright stupid (but I guess that’s their privilege), so let’s set the record straight.
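Back-of-the-envelope math shows where the scary numbers come from – and why they don’t matter: a full mesh between N hosts is N×(N−1)/2 host pairs, but because the tunnels are stateless point-to-point encapsulations (no keepalives, no per-tunnel interfaces to configure), the only real per-host state is a table entry per remote host:

```python
# Host pairs in a full mesh of stateless point-to-point tunnels,
# versus the actual per-host state: one table entry per remote host.
def full_mesh_pairs(hosts: int) -> int:
    return hosts * (hosts - 1) // 2

for hosts in (100, 500, 1000):
    print(f"{hosts:>5} hosts -> {full_mesh_pairs(hosts):>8,} tunnel pairs, "
          f"{hosts - 1:>4} entries per host")
# 1,000 hosts -> 499,500 "tunnels" on paper, yet only 999 remote-VTEP
# entries per host -- trivial soft state, not configuration.
```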
What is VMware NSX?
Answer #1: An overlay virtual networking solution providing logical bridging (aka layer-2 forwarding or switching), logical routing (aka layer-3 switching), distributed or centralized firewalls, load balancers, NAT and VPNs.
Answer #2: A merger of Nicira NVP and VMware vCNS (a product formerly known as vShield).
Oh, and did I mention it’s actually two products, not one?
How big is a big private cloud?
During the UCS Director Overview Packet Pushers podcast I listened to recently, the participants started discussing use cases, and someone mentioned that UCS Director might not be applicable to small shops with only a few thousand VMs. Let’s put that number in perspective.
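For a rough sense of scale, here’s the arithmetic with assumed consolidation ratios (my numbers, not the podcast’s): a few thousand VMs fit on a surprisingly small number of physical servers.

```python
import math

# How many hosts do "a few thousand VMs" need? Consolidation ratios
# below are assumptions for illustration, not numbers from the podcast.
def hosts_needed(vms: int, vms_per_host: int) -> int:
    return math.ceil(vms / vms_per_host)

for ratio in (20, 40):
    print(f"3,000 VMs at {ratio} VMs/host -> {hosts_needed(3000, ratio)} hosts")
# 75-150 hosts: a couple of racks of dense servers, not a mega-cloud.
```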
Networking Enhancements in Windows Server 2012 R2
The “What’s coming in Hyper-V Network Virtualization (Windows Server 2012 R2)” blog post got way too long, so I had to split it into two parts: Hyper-V Network Virtualization and the rest of the features (this post).
Control Plane Protocols in Overlay Virtual Networks
Multiple overlay network encapsulations are nothing more than a major inconvenience for anyone trying to support more than one overlay virtual networking technology (just ask F5 or Arista) – and the religious wars fought over individual bit fields are close to meaningless.
The key differentiator between scalable and not-so-very-scalable architectures and technologies is the control plane – the mechanism that maps (at the very minimum) a remote VM MAC address into the transport network IP address of the target hypervisor (see A Day in a Life of an Overlaid Virtual Packet for more details).
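At its simplest that mapping is nothing more than a lookup table; the interesting question is how the table gets populated. A minimal illustration (made-up addresses, not any product’s data structure):

```python
# The minimal control-plane state: remote VM MAC address -> transport
# (VTEP) IP address of the hypervisor hosting that VM.
mac_to_vtep = {
    "00:50:56:aa:00:01": "172.16.0.11",
    "00:50:56:aa:00:02": "172.16.0.12",
}

def lookup_vtep(dst_mac: str):
    """Scalable designs answer this from state pushed by a controller;
    less scalable ones flood the frame over the transport network and
    learn the mapping from the reply (dynamic MAC learning)."""
    return mac_to_vtep.get(dst_mac)

print(lookup_vtep("00:50:56:aa:00:01"))  # 172.16.0.11 -> encapsulate and send
print(lookup_vtep("00:50:56:ff:ff:ff"))  # None -> flood or query the controller
```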
What’s Coming in Hyper-V Network Virtualization (Windows Server 2012 R2)
Right after Microsoft’s TechEd event CJ Williams kindly sent me links to videos describing the new features in the upcoming Windows Server (and Hyper-V) release. I would strongly recommend you watch What’s New in Windows Server 2012 R2 Networking and Deep Dive on Hyper-V Network Virtualization in Windows Server 2012 R2; here’s a short(er) summary.
This blog post describes futures that will ship in 2H2013. However, as all the videos mentioned above included live demos, and the preview release shipped on June 24th, it’s obvious they’re past the “it works so great in PowerPoint” stage.
A Day in a Life of an Overlaid Virtual Packet
I explain the intricacies of overlay network forwarding in every overlay-network-related webinar (Cloud Computing Networking, VXLAN deep dive...), but have never written a blog post about them. Let’s fix that.
First of all, remember that most mainstream overlay network implementations (Cisco Nexus 1000V, VMware vShield, Microsoft Hyper-V) don’t change the intra-hypervisor network behavior: a virtual machine network interface card (VM NIC) is still connected to a layer-2 hypervisor switch. The magic happens between the internal layer-2 switch and the physical (server) NIC.
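Sketched in code, the decision point between the internal layer-2 switch and the physical NIC looks roughly like this (a simplified, VXLAN-like sketch with made-up addresses – not any product’s implementation):

```python
LOCAL_VTEP = "172.16.0.11"                           # this host's transport IP
local_macs = {"00:50:56:aa:00:01"}                   # VMs on this hypervisor
remote_macs = {"00:50:56:aa:00:02": "172.16.0.12"}   # remote MAC -> remote VTEP

def forward(frame: dict) -> dict:
    dst = frame["dst_mac"]
    if dst in local_macs:
        return frame                # delivered by the internal layer-2 switch
    vtep = remote_macs.get(dst)
    if vtep is None:
        raise LookupError("unknown MAC: flood or ask the control plane")
    # The original L2 frame becomes the payload of an IP packet that the
    # physical network routes like any other inter-hypervisor traffic.
    return {"outer_src": LOCAL_VTEP, "outer_dst": vtep,
            "vni": 5001, "payload": frame}

pkt = forward({"dst_mac": "00:50:56:aa:00:02", "data": b"..."})
print(pkt["outer_dst"])  # 172.16.0.12 -- routed across the transport fabric
```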
Cloud-as-an-Appliance Design
The original idea behind the cloud-as-an-appliance design came from Brad Hedlund’s blog post in which he described how he’d build a greenfield Hadoop or private cloud cluster with servers connected to a Clos fabric. Throw virtual appliances into the mix and you get an extremely simple and versatile architecture.
Where’s the Revolutionary Networking Innovation?
In his recent blog post, Joe Onisick wrote: “What network virtualization doesn’t provide, in any form, is a change to the model we use to deploy networks and support applications. [...] All of the same broken or misused methodologies are carried forward. [...] Faithful replication of today’s networking challenges as virtual machines with encapsulation tunnels doesn’t move the bar for deploying applications.”