Why Is Network Virtualization So Hard?
For the last few years we’ve been hearing how networking is the last bastion of rigidity in the wonderful unicorn-flavored virtual world. Let’s see why it’s so much harder to virtualize networks than compute or storage capacity (side note: it didn’t help that virtualization vendors had no clue about networking, but things are changing).
OpenFlow Fabric Controllers Are Light-years Away from Wireless Ones
When talking about OpenFlow and the whole idea of controller-based networking, people usually say “well, it’s nothing radically new, we’ve been using wireless controllers for years and they work well, so the OpenFlow ones will work as well.”
Unfortunately, the comparison is totally misleading.
DF bit explained
Finally, a good explanation of what the DF bit does ;)
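If you’d rather see the DF bit in action, here’s a quick Scapy sketch (my own toy example, not from the linked article) that uses DF-marked probes to find out how large a packet can cross the path. It assumes root privileges; 192.0.2.1 is a documentation-range placeholder.

```python
# Path-MTU probing with the DF (Don't Fragment) bit: a minimal sketch.
# Requires Scapy and root privileges; 192.0.2.1 is a placeholder address.
from scapy.all import IP, ICMP, Raw, sr1

def probe(dst, size):
    """Send an ICMP echo of the given total size with DF set.

    Returns True if the probe got through, False if a router on the
    path replied with ICMP 'fragmentation needed' (type 3, code 4).
    """
    payload = b"x" * (size - 28)          # 20B IP header + 8B ICMP header
    pkt = IP(dst=dst, flags="DF") / ICMP() / Raw(load=payload)
    resp = sr1(pkt, timeout=2, verbose=0)
    if resp is None:
        return False                      # lost or silently dropped
    if resp.haslayer(ICMP) and resp[ICMP].type == 3 and resp[ICMP].code == 4:
        return False                      # too big; DF forbids fragmentation
    return True

for size in (1500, 1400, 1300):
    print(size, "fits" if probe("192.0.2.1", size) else "does not fit")
```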
Layer-2 DCI with Enterasys Switches
The second half of the Enterasys DCI Solutions webinar focused on real-life case studies. First, the less interesting one: long-distance live VM migration (you know my feelings about the whole concept, but sometimes you just have to do it) and the role of fabric routing and host routing in the process.
Sooner or Later, Someone Will Pay for the Complexity of the Kludges You Use
I loved listening to the OTV/FabricPath/LISP Packet Pushers podcast. Ron Fuller and Russ White did a great job explaining the roles of OTV, FabricPath and LISP in a stretched (inter-DC) subnet deployment scenario and how the three pieces fit together … but I couldn’t stop wondering whether there’s a better way to solve the underlying business need than throwing three fairly complex new technologies and the associated equipment (or VDC contexts or line cards) into the mix.
Extending Layer-2 Connection into a Cloud
Carlos Asensio was facing an “interesting” challenge: someone had sold a layer-2 extension into their public cloud to one of the customers. Being a good engineer, he wanted to limit the damage the customer could do to the cloud infrastructure and thus immediately rejected the idea of connecting the customer straight into the layer-2 network core ... but what could he do?
The Plexxi Challenge (or: Don’t Blame the Tools)
Plexxi has an incredibly creative data center fabric solution: they paired data center switching with CWDM optics, programmable ROADMs and controller-based traffic engineering to get something that looks almost like a distributed switched version of FDDI (or Token Ring for the FCoTR fans). Not surprisingly, the tools we use to build traditional networks don’t work well with their architecture.
In a recent blog post Marten Terpstra hinted at the shortcomings of the Shortest Path First (SPF) approach used by every single modern routing algorithm. Let’s take a closer look at why Plexxi’s engineers couldn’t use SPF.
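To make the discussion more tangible, here’s a minimal sketch of what SPF actually does: plain Dijkstra over link costs (my illustration, not anyone’s production code). Notice how everything that isn’t on a least-cost path is thrown away:

```python
# Minimal SPF (Dijkstra) sketch: computes least-cost paths from a source.
# Anything that is not on a least-cost path gets discarded, so SPF cannot
# use a feasible-but-more-expensive parallel path (Plexxi's whole point).
import heapq

def spf(graph, source):
    """graph: {node: {neighbor: cost}}; returns {node: (cost, prev_hop)}."""
    best = {source: (0, None)}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > best[node][0]:
            continue                      # stale entry; a cheaper path won
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < best.get(neighbor, (float("inf"), None))[0]:
                best[neighbor] = (new_cost, node)
                heapq.heappush(queue, (new_cost, neighbor))
    return best

triangle = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1},
    "C": {"A": 5, "B": 1},
}
print(spf(triangle, "A"))   # the direct A-C link (cost 5) is never used
```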
Combining DMVPN with Existing MPLS/VPN Network
One of the Expert Express sessions focused on an MPLS/VPN-based WAN network using OSPF as the routing protocol. The customer wanted to add DMVPN-based backup links and planned to retain OSPF as the routing protocol. Not surprisingly, the initial design had all sorts of unexpectedly complex kludges (see the case study for more details).
Having a really smart engineer on the other end of the WebEx call, I had to ask a single question: “Why don’t you use BGP everywhere?” After a short pause, I got back the expected reply: “wow… now it all makes sense.”
Enterasys Host Routing – Optimal L3 Forwarding with VM Mobility
I spent the last few weeks blogging about the brave new overlay worlds. Time to return to VLAN-based physical reality and revisit one of the challenges of VM mobility: mobile IP addresses.
A while ago I speculated that you might solve inter-subnet VM mobility with Mobile ARP. While Mobile ARP isn’t the best idea ever invented, it just might work reasonably well for environments with dozens (not millions) of virtual servers.
Enterasys decided to go down that route and implement host routing in their data center switches. For more details, watch the video from the Enterasys DCI webinar.
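In case you need a quick mental model of host routing, here’s a conceptual Python sketch (hypothetical switch names, and definitely not Enterasys’ implementation): every ToR switch advertises a /32 route for each locally attached VM, so a VM move changes exactly one route:

```python
# Host routing in a nutshell (conceptual sketch, not Enterasys' code):
# every ToR switch advertises a /32 host route for each directly attached
# VM; when a VM moves, only that one route moves with it.
from ipaddress import ip_address

class Fabric:
    def __init__(self):
        self.host_routes = {}              # VM /32 -> switch advertising it

    def vm_attached(self, vm_ip, switch):
        """Called when a switch sees a VM (e.g., via GARP after vMotion)."""
        self.host_routes[ip_address(vm_ip)] = switch

    def next_hop(self, dst_ip):
        """Longest-prefix match degenerates to an exact /32 lookup here."""
        return self.host_routes.get(ip_address(dst_ip), "default gateway")

fabric = Fabric()
fabric.vm_attached("10.1.1.10", "tor-1")     # VM boots behind ToR 1
print(fabric.next_hop("10.1.1.10"))          # -> tor-1
fabric.vm_attached("10.1.1.10", "tor-7")     # live migration to ToR 7
print(fabric.next_hop("10.1.1.10"))          # -> tor-7 (one route changed)
```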
Virtual Appliance Routing – Network Engineer’s Survival Guide
Routing protocols running on virtual appliances significantly increase the flexibility of virtual-to-physical network integration – you can easily move the whole application stack across subnets or data centers without changing the physical network configuration.
Major hypervisor vendors already support the concept: VMware NSX-T edge nodes can run BGP or OSPF, and Hyper-V gateways can run BGP. Like it or not, we’ll have to accept these solutions in the near future – here’s a quick survival guide.
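To illustrate how simple the appliance side can be, here’s a minimal sketch using ExaBGP’s process API (the prefix and next-hop values are placeholders, not anyone’s recommended settings): the appliance announces its service prefix, and the physical network learns the stack’s new location wherever the BGP session comes up:

```python
#!/usr/bin/env python3
# Minimal sketch of an ExaBGP "process" script: the virtual appliance
# announces its service prefix over BGP, so the physical network always
# knows where the application stack currently lives. Prefix and next-hop
# are placeholder values; adjust to your environment.
import sys
import time

SERVICE_PREFIX = "203.0.113.0/24"   # prefix hosting the app stack (example)
APPLIANCE_IP = "10.0.0.5"           # this appliance's uplink address (example)

# ExaBGP reads routing commands from this script's stdout.
sys.stdout.write(f"announce route {SERVICE_PREFIX} next-hop {APPLIANCE_IP}\n")
sys.stdout.flush()

# Keep the process alive; when it exits, ExaBGP withdraws the announcement.
while True:
    time.sleep(60)
```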
Are Overlay Networking Tunnels a Scalability Nightmare?
Every time I mention overlay virtual networking tunnels someone starts worrying about the scalability of this approach along the lines of “In a data center with hundreds of hosts, won’t I end up with an impossibly high number of GRE tunnels in a full mesh? Are there scaling limitations to this approach?”
Not surprisingly, some ToR switch vendors abuse this fear to the point where they look downright stupid (but I guess that’s their privilege), so let’s set the record straight.
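A bit of back-of-the-envelope arithmetic (a quick Python sketch) shows what’s really going on: a full mesh of N hosts does have N*(N-1)/2 host pairs, but each individual host needs at most N-1 table entries, and stateless encapsulation keeps no per-tunnel state at all:

```python
# Full-mesh tunnel arithmetic: scary-looking totals, trivial per-host state.
def full_mesh_tunnels(n):
    return n * (n - 1) // 2     # unique host pairs in the fabric

def per_host_entries(n):
    return n - 1                # remote endpoints a single host could need

for hosts in (100, 500, 1000):
    print(f"{hosts} hosts: {full_mesh_tunnels(hosts):>6} pairwise tunnels, "
          f"only {per_host_entries(hosts)} table entries per host")

# Stateless encapsulation (VXLAN, GRE ...) keeps no per-tunnel state anyway;
# a "tunnel" is just another header prepended to the packet.
```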
Routing Protocols on NSX Edge Services Router
VMware gave me early access to NSX hands-on lab a few days prior to VMworld 2013. The lab was meant to demonstrate the basics of NSX, from VXLAN encapsulation to cross-subnet flooding, but I quickly veered off the beaten path and started playing with routing protocols in NSX Edge appliances.
What is VMware NSX?
Answer #1: An overlay virtual networking solution providing logical bridging (aka layer-2 forwarding or switching), logical routing (aka layer-3 switching), distributed or centralized firewalls, load balancers, NAT and VPNs.
Answer #2: A merger of Nicira NVP and VMware vCNS (a product formerly known as vShield).
Oh, and did I mention it’s actually two products, not one?
Design Options in Dual-Stack Data Centers
Tore Anderson started his part of the IPv6-Only Data Centers webinar with a comprehensive analysis of numerous design options you have when implementing dual-stack access to your data center.
Unless you’ve decided to live under a rock for the next 20 years or plan to drop out of networking in the very near future, you simply (RFC 2119) MUST watch this video.
50 Shades of Statefulness
A while ago Greg Ferro wrote a great article describing the integration of overlay and physical networks, in which he claimed that “an overlay network tunnel has no state in the physical network”. That triggered an almost-immediate reaction from Marten Terpstra (of RIPE fame, now @ Plexxi), who argued that the network (at least the first ToR switch) knows the MAC and IP address of the hypervisor host and thus keeps at least some state associated with the tunnel.
Marten is correct from a purely scholastic perspective (by the same argument, the network keeps some state about TCP sessions as well), but what really matters is how much state is kept, which device keeps it, how it’s created, and how often it changes.
