Category: workshop
Why Is Network Virtualization So Hard?
For the last few years we’ve been hearing how networking is the last bastion of rigidity in the wonderful unicorn-flavored virtual world. Let’s see why it’s so much harder to virtualize networks than compute or storage capacity (side note: it didn’t help that virtualization vendors had no clue about networking, but things are changing).
Sooner or Later, Someone Will Pay for the Complexity of the Kludges You Use
I loved listening to the OTV/FabricPath/LISP Packet Pushers podcast. Ron Fuller and Russ White did a great job explaining the roles of OTV, FabricPath and LISP in a stretched (inter-DC) subnet deployment scenario and how the three pieces fit together … but I couldn’t stop wondering whether there’s a better way to solve the underlying business need than throwing three new, fairly complex technologies and the associated equipment (or VDC contexts or line cards) into the mix.
The Plexxi Challenge (or: Don’t Blame the Tools)
Plexxi has an incredibly creative data center fabric solution: they paired data center switching with CWDM optics, programmable ROADMs and controller-based traffic engineering to get something that looks almost like a distributed switched version of FDDI (or Token Ring for the FCoTR fans). Not surprisingly, the tools we use to build traditional networks don’t work well with their architecture.
In a recent blog post Marten Terpstra hinted at the shortcomings of the Shortest Path First (SPF) approach used by every single modern routing algorithm. Let’s take a closer look at why Plexxi’s engineers couldn’t use SPF.
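To see what’s being questioned, it helps to remember what SPF actually does: from each source it computes a single lowest-cost path tree over the link costs. Here’s a minimal, generic Dijkstra-style sketch (my own illustration, not Plexxi-specific code) of that “one best path” behavior:

```python
# Minimal SPF (Dijkstra) sketch: given link costs, compute the lowest cost
# from a source to every reachable node. Topology and costs are made up.
import heapq

def spf(graph, source):
    """Return {node: lowest cost from source} for a weighted graph."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

topology = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
print(spf(topology, "A"))   # {'A': 0, 'B': 1, 'C': 2} -- one best path, nothing else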
Enterasys Host Routing – Optimal L3 Forwarding with VM Mobility
I spent the last few weeks blogging about the brave new overlay worlds. Time to return to VLAN-based physical reality and revisit one of the challenges of VM mobility: mobile IP addresses.
A while ago I speculated that you might solve inter-subnet VM mobility with Mobile ARP. While Mobile ARP isn’t the best idea ever invented, it just might work reasonably well for environments with dozens (not millions) of virtual servers.
Enterasys decided to go down that route and implement host routing in their data center switches. For more details, watch the video from the Enterasys DCI webinar.
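The general mechanism is easy to visualize: the first-hop switch that sees a VM locally injects a /32 host route for it and withdraws the route once the VM moves away, so traffic follows the VM regardless of which subnet it “belongs” to. Here’s a minimal conceptual sketch of that idea (my own illustration of generic host-route injection, not Enterasys’ actual implementation):

```python
# Conceptual sketch of host-route-based VM mobility (illustration only,
# not Enterasys' code): a first-hop switch injects a /32 host route for
# every locally attached VM and withdraws it when the VM moves away.
routing_table = {}                          # prefix -> "how to reach it"

def vm_detected(vm_ip, switch_id):
    routing_table[f"{vm_ip}/32"] = f"directly connected behind {switch_id}"

def vm_moved_away(vm_ip):
    routing_table.pop(f"{vm_ip}/32", None)

vm_detected("192.168.10.5", "ToR-A")        # VM boots behind ToR-A
print(routing_table)                        # {'192.168.10.5/32': 'directly connected behind ToR-A'}
vm_moved_away("192.168.10.5")               # VM migrates, old route is withdrawn...
vm_detected("192.168.10.5", "ToR-B")        # ...and reappears behind ToR-B
```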
Are Overlay Networking Tunnels a Scalability Nightmare?
Every time I mention overlay virtual networking tunnels, someone starts worrying about the scalability of this approach along the lines of “In a data center with hundreds of hosts, do I have an impossibly high number of GRE tunnels in the full mesh? Are there scaling limitations to this approach?”
Not surprisingly, some ToR switch vendors abuse this fear to the point where they look downright stupid (but I guess that’s their privilege), so let’s set the record straight.
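To put the fear into numbers: a full mesh between N hosts implies N×(N−1)/2 host-to-host tunnels on paper; whether that’s a real problem depends entirely on how much state (if any) each “tunnel” actually requires. A quick back-of-the-envelope calculation (my own illustration, not from the original post):

```python
def full_mesh_tunnels(hosts):
    """Point-to-point tunnels needed for a full mesh of N hosts."""
    return hosts * (hosts - 1) // 2

for hosts in (100, 300, 1000):
    print(f"{hosts:4d} hosts -> {full_mesh_tunnels(hosts):6d} tunnels")
```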
Routing Protocols on NSX Edge Services Router
VMware gave me early access to the NSX hands-on lab a few days prior to VMworld 2013. The lab was meant to demonstrate the basics of NSX, from VXLAN encapsulation to cross-subnet flooding, but I quickly veered off the beaten path and started playing with routing protocols in NSX Edge appliances.
What is VMware NSX?
Answer #1: An overlay virtual networking solution providing logical bridging (aka layer-2 forwarding or switching), logical routing (aka layer-3 switching), distributed or centralized firewalls, load balancers, NAT and VPNs.
Answer #2: A merger of Nicira NVP and VMware vCNS (a product formerly known as vShield).
Oh, and did I mention it’s actually two products, not one?
50 Shades of Statefulness
A while ago Greg Ferro wrote a great article describing the integration of overlay and physical networks, in which he wrote that “an overlay network tunnel has no state in the physical network.” That triggered an almost-immediate reaction from Marten Terpstra (of RIPE fame, now @ Plexxi), who argued that the network (at least the first ToR switch) knows the MAC and IP address of the hypervisor host and thus has at least some state associated with the tunnel.
Marten is correct from a purely scholastic perspective (using his argument, the network keeps some state about TCP sessions as well), but what really matters is how much state is kept, which device keeps it, how it’s created and how often it changes.
Networking Enhancements in Windows Server 2012 R2
The “What’s coming in Hyper-V Network Virtualization (Windows Server 2012 R2)” blog post got way too long, so I had to split it into two parts: Hyper-V Network Virtualization and the rest of the features (this post).
Nicira NVP Control Plane
In the previous posts I described how a typical overlay virtual networking data plane works and what technologies vendors use to implement the associated control plane that maps VM MAC addresses to transport IP addresses. Now let’s walk through the details of a particular implementation: Nicira Network Virtualization Platform (NVP), part of VMware NSX.
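Before diving into the NVP specifics, keep in mind what any overlay control plane ultimately has to provide: a mapping from VM MAC addresses to the transport IP addresses of the hypervisors behind which they live. A trivial conceptual sketch of that mapping (my own illustration, not NVP’s actual data model):

```python
# Conceptual sketch of the MAC-to-transport-IP mapping an overlay control
# plane maintains (illustration only -- not Nicira NVP's data structures).
vm_mac_to_vtep = {}

def vm_started(vm_mac, hypervisor_ip):
    """The controller learns that a VM (MAC) now lives behind a transport IP."""
    vm_mac_to_vtep[vm_mac] = hypervisor_ip

def lookup_vtep(dst_mac):
    """The encapsulating hypervisor asks: which transport IP do I tunnel to?"""
    return vm_mac_to_vtep.get(dst_mac)

vm_started("00:50:56:aa:bb:01", "10.0.1.11")
vm_started("00:50:56:aa:bb:02", "10.0.2.12")
print(lookup_vtep("00:50:56:aa:bb:02"))     # -> 10.0.2.12
```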
What’s Coming in Hyper-V Network Virtualization (Windows Server 2012 R2)
Right after Microsoft’s TechEd event CJ Williams kindly sent me links to videos describing new features in the upcoming Windows Server (and Hyper-V) release. I would strongly recommend you watch What’s New in Windows Server 2012 R2 Networking and Deep Dive on Hyper-V Network Virtualization in Windows Server 2012 R2, and here’s a short(er) summary.
This blog post describes futures that will ship in 2H2013. However, as all the videos mentioned above included live demos, and the preview release shipped on June 24th, it’s obvious they’re past the “it works so great in PowerPoint” stage.
Optimal Layer-3 Forwarding with Active/Active VRRP (Enterasys Fabric Routing)
Enterasys implemented optimal layer-3 forwarding with an interesting trick: they support VRRP like any other switch vendor, but allow you to make all members of a VRRP group active forwarders regardless of their status.
Apart from a slightly more synchronized behavior, their implementation doesn’t differ much from Arista’s Virtual ARP, and thus shares the same design and deployment caveats.
For more information, watch the Fabric Routing video from the Enterasys Robust Data Center Interconnect Solutions webinar.
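Conceptually, the difference from standard VRRP boils down to who is allowed to route traffic sent to the shared virtual MAC address. Here’s a trivial sketch of that forwarding decision (my own generic illustration, not Enterasys’ or Arista’s implementation):

```python
# With classic VRRP only the master routes traffic sent to the virtual MAC;
# in an active/active setup every group member does (generic illustration).
VIRTUAL_MAC = "00:00:5e:00:01:01"           # standard VRRP virtual MAC for VRID 1

def should_route(dst_mac, is_master, active_active):
    if dst_mac != VIRTUAL_MAC:
        return False                        # frame not addressed to the gateway
    return is_master or active_active       # active/active: every member forwards

print(should_route(VIRTUAL_MAC, is_master=False, active_active=True))   # True
```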
A Day in a Life of an Overlaid Virtual Packet
I explain the intricacies of overlay network forwarding in every overlay-network-related webinar (Cloud Computing Networking, VXLAN deep dive...), but never wrote a blog post about them. Let’s fix that.
First of all, remember that most mainstream overlay network implementations (Cisco Nexus 1000V, VMware vShield, Microsoft Hyper-V) don’t change the intra-hypervisor network behavior: a virtual machine network interface card (VM NIC) is still connected to a layer-2 hypervisor switch. The magic happens between the internal layer-2 switch and the physical (server) NIC.
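To make the forwarding path concrete, here’s a highly simplified sketch of the decision a hypervisor virtual switch makes for a frame leaving a VM NIC (generic MAC-over-IP behavior along the lines of VXLAN, not any specific vendor’s implementation; the MAC and IP values are made up):

```python
# Simplified per-frame decision in a hypervisor virtual switch with an
# overlay uplink (generic illustration; addresses are hypothetical).
LOCAL_VM_MACS = {"00:50:56:aa:bb:01"}                       # VMs on this host
REMOTE_MAC_TO_VTEP = {"00:50:56:aa:bb:02": "10.0.2.12"}     # learned/controller-provided
LOCAL_VTEP = "10.0.1.11"                                    # this host's transport address

def forward_vm_frame(dst_mac):
    if dst_mac in LOCAL_VM_MACS:
        return "deliver locally through the layer-2 hypervisor switch"
    vtep = REMOTE_MAC_TO_VTEP.get(dst_mac)
    if vtep is None:
        return "unknown MAC: flood or drop, depending on the implementation"
    # Encapsulate the original Ethernet frame (MAC-over-IP) and send it from
    # the local transport address to the remote hypervisor.
    return f"encapsulate and send {LOCAL_VTEP} -> {vtep}"

print(forward_vm_frame("00:50:56:aa:bb:02"))   # encapsulate and send 10.0.1.11 -> 10.0.2.12
```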
Can I Use Shared (RFC 6598) IPv4 Address Space Within My Network?
Andrew sent me the following question: “I'm pushing to start a conversation about IPv6 in my organization, but meanwhile I've no RFC 1918 space left. What's your take on 100.64.0.0/10 - it seems like this is available for RFC 1918 purposes, even if not intentionally?”
Short answer: Don’t even think about that!
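For reference, the shared address space in question is 100.64.0.0/10, defined in RFC 6598 for carrier-grade NAT and deliberately kept separate from RFC 1918. A trivial check (my own illustration) makes the distinction explicit:

```python
# Distinguish RFC 1918 private space from RFC 6598 shared (CGN) space.
from ipaddress import ip_address, ip_network

RFC6598 = ip_network("100.64.0.0/10")
RFC1918 = [ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def classify(addr):
    a = ip_address(addr)
    if any(a in n for n in RFC1918):
        return "RFC 1918 private space"
    if a in RFC6598:
        return "RFC 6598 shared space (reserved for carrier-grade NAT, not internal use)"
    return "something else"

print(classify("100.100.1.1"))   # -> RFC 6598 shared space (reserved for carrier-grade NAT, ...)
```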
Cloud-as-an-Appliance Design
The original idea behind cloud-as-an-appliance design came from Brad Hedlund’s blog post in which he described how he’d build a greenfield Hadoop or private cloud cluster with servers connected to a Clos fabric. Throw virtual appliances into the mix and you get an extremely simple and versatile architecture: