Category: virtualization
Virtualized Squashed Complexity Sausage
Straight from RFC 6670 (section 3.4):
[...] as is usually the case with communications technologies, simplification in one element of the system introduces an increase (possibly a non-linear one) in complexity elsewhere. This creates the "squashed sausage" effect, where reduction in complexity at one place leads to significant increase in complexity at a remote location.
This is probably the most concise description of the great idea of using long-distance vMotion for “mission-critical” craplications, and it applies equally well to the kludges used to compensate for the simplicity of virtual switches.
VMware buys Nicira: a Hypervisor Vendor Woke Up
Almost a year ago, I predicted that the hypervisor vendors would eventually wake up and realize it was time to get rid of VLANs and decouple virtual networks from the physical world. We got the first glimpse of the brave new world a few weeks after that post was published with the VXLAN launch, but that was still a Cisco solution running on top of VMware’s (and now everyone else’s) hypervisor. VMware’s recent acquisition of Nicira proves that VMware finally woke up big time.
Long-Distance Workload Mobility in Perspective
Sometime in 2012, Chuck Hollis described how some of EMC’s customers use long-distance workload mobility. Not surprisingly, he focused on the VPLEX Metro part of the solution and didn’t even mention the earth-flattening requirements this idea imposes on the network. I guess you already know my views on that topic, but regardless of my personal opinions, he got me curious.
Does CPU-based forwarding performance matter for SDN?
David Le Goff sent me several great SDN-related questions. Here’s the first one:
What is your take on the performance issue with software-based equipment when dealing with general purpose CPU only? Do you see this challenge as a hard stop to SDN business?
The short answer (as always) is: it depends. However, I think most people approach this issue the wrong way.
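To see why it depends, here’s a quick back-of-the-envelope sketch (in Python, using assumed figures – a 10 Gbps link and a single 3 GHz core, neither of which comes from David’s question) showing how many CPU cycles a software forwarder gets per packet at line rate:

```python
# Back-of-the-envelope CPU budget for software packet forwarding.
# Illustrative assumptions only: 10 Gbps link, one 3 GHz core.

LINK_BPS = 10e9          # assumed link speed (10 Gbps)
CLOCK_HZ = 3.0e9         # assumed CPU core clock (3 GHz)
WIRE_OVERHEAD = 20       # preamble + SFD (8 B) + inter-frame gap (12 B)

def cycles_per_packet(frame_bytes):
    """CPU cycles available per packet at line rate on one core."""
    wire_bits = (frame_bytes + WIRE_OVERHEAD) * 8
    pps = LINK_BPS / wire_bits            # packets per second at line rate
    return CLOCK_HZ / pps, pps

for size in (64, 512, 1500):
    cycles, pps = cycles_per_packet(size)
    print(f"{size:>5}-byte frames: {pps/1e6:6.2f} Mpps -> {cycles:7.0f} cycles/packet")
```

With 64-byte frames you get roughly 200 cycles per packet – barely enough to touch the headers – while 1500-byte frames leave you thousands, which is why the answer depends so heavily on the traffic profile and on how much work you do per packet versus per flow.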
Could MPLS-over-IP replace VXLAN or NVGRE?
A lot of engineers are concerned by what seems to be the frivolous creation of new encapsulation formats supporting virtual networks. While STT makes technical sense (it allows soft switches to use the existing NIC TCP offload functionality), it’s harder to see the benefits of VXLAN and NVGRE. Scott Lowe wrote a great blog post recently in which he asked a very valid question: “Couldn’t we use MPLS over GRE or IP?” We could, but we wouldn’t gain anything by doing that.
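To put the encapsulation question in perspective, here’s a rough per-packet overhead comparison (a Python sketch using the standard header sizes over an IPv4 underlay with no optional fields – treat it as an illustration, not a spec quote):

```python
# Rough per-packet overhead for carrying an Ethernet frame across an
# IPv4 underlay. Assumes no IP options and no GRE checksum/sequence.

OUTER_ETH, IPV4, UDP, GRE_BASE, GRE_KEY, VXLAN, MPLS_LABEL = 14, 20, 8, 4, 4, 8, 4

encaps = {
    "VXLAN (MAC-in-UDP)":      OUTER_ETH + IPV4 + UDP + VXLAN,
    "NVGRE (GRE with key)":    OUTER_ETH + IPV4 + GRE_BASE + GRE_KEY,
    "MPLS over GRE":           OUTER_ETH + IPV4 + GRE_BASE + MPLS_LABEL,
    "MPLS over IP (RFC 4023)": OUTER_ETH + IPV4 + MPLS_LABEL,
}

for name, overhead in sorted(encaps.items(), key=lambda kv: kv[1]):
    print(f"{name:<26} {overhead:>3} bytes of tunnel overhead per frame")
```

The difference between the formats is a handful of bytes per frame, so the interesting questions are elsewhere – in the control plane and in what the NICs can offload – not in the encapsulation overhead itself.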
Big Switch and Overlay Networks
A few days ago Big Switch announced they’ll support overlay networks in their upcoming software release. After a brief “told you so” moment (because virtual networks in physical devices don’t scale all that well), I started wondering whether they had simply given up and decided to become a Nicira copycat, so I was more than keen to have a brief chat with Kyle Forster (graciously offered by Isabelle Guis).
Layer-2 Network Is a Single Failure Domain
This topic has been on my to-write list for over a year, and its working title was phrased as a question, but all the horror stories you’ve shared with me over the last year or so (some of them published on my blog) have persuaded me that there’s no question – it’s a fact.
If you think I’m rephrasing the same topic ad nauseam, you’re right, but every month or so I get an external trigger that pushes me back to the same discussion, this time an interesting comment thread on Massimo Re Ferre’s blog.
Virtual Networks: the Skype Analogy
I usually use the “Nicira is Skype of virtual networking” analogy when describing the differences between Nicira’s NVP and traditional VLAN-based implementations. Cade Metz liked it so much he used it in his What Is a Virtual Network? It’s Not What You Think It Is article, so I guess a blog post is long overdue.
Before going into more details, you might want to browse through my Cloud Networking Scalability presentation (or watch its recording) – the crucial slide is this one:
Brocade VCS Fabric
Just prior to Networking Field Day, the merry band of geeks sat down with Chip Copper, Brocade’s Solutioneer (a job title almost as good as Packet Herder) to discuss the intricate details of VCS Fabric. The videos are well worth watching – the technical details are interesting, but above all, Chip is a fantastic storyteller.
Virtual Networking is more than VMs and VLAN duct tape
VMware has a fantastic-looking cloud provisioning tool – vCloud Director. It allows cloud tenants to deploy their VMs and create new virtual networks with a click of a mouse (the underlying network has to provide a range of VLANs, or you could use VXLAN or vCDNI to implement the virtual segments).
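As a quick illustration of why the choice of segment technology matters, here’s a small calculation of the theoretical segment-ID space (a Python sketch; the VLAN and VXLAN ID widths are the standard ones, vCDNI is left out, and none of this says anything about what a real deployment can actually handle):

```python
# Theoretical number of isolated tenant segments per technology.
# Illustrative math only; real-world limits are usually much lower.

def usable_segments(id_bits, reserved=0):
    return 2 ** id_bits - reserved

options = {
    "802.1Q VLAN (12-bit ID, 0 and 4095 reserved)": usable_segments(12, reserved=2),
    "VXLAN (24-bit VNI)":                           usable_segments(24),
}

for name, count in options.items():
    print(f"{name:<46} {count:>10,} segments")
```

A 12-bit VLAN ID tops out at 4094 usable segments, while a 24-bit VXLAN VNI gives you around 16 million – which is a big part of the reason the overlay encapsulations were introduced in the first place.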
Needless to say, when engineers not familiar with the networking intricacies create point-and-click application stacks without firewalls and load balancers, you get some interesting designs.
LineRate Proxy: Software L4-7 Appliance With a Twist
Buying a new networking appliance (be it a VPN concentrator, firewall or load balancer … aka Application Delivery Controller) is a royal pain. You never know how much performance you’ll need in two or three years (and your favorite bean counter will not allow you to scrap it in less than 4-5 years). You do know you’ll never get the performance promised in the vendor’s data sheets … but you don’t always know which combination of features will kill the box.
Now, imagine someone offers you a performance guarantee – you’ll always get what you paid for. That’s what LineRate Systems, a startup just exiting stealth mode, is promising.
vCider: A Hammer Looking For a Nail?
Last week Juergen Brendel published an interesting blog post describing how you can use vCider to implement high-availability clusters with a multi-cloud strategy, triggering the following response from one of my readers: “I hadn't heard of vCider before but seeing stuff like this always makes me doubt my sanity – is there really a situation where the only solution is multi-site L2?”
Networking Tech Field Day #3: First Impressions
Last week Stephen Foskett and Greg Ferro brought back their merry crew of geeks (and a network security princess) for the third Networking Tech Field Day. We met some exciting new vendors (Infineta and Spirent) and a few long-time friends (Arista, Cisco, NEC and Solarwinds).
Infineta gave us a fantastic deep-dive into deduplication math, and Spirent blew our socks off with their testing gear. As for the generic state of the networking industry, William R. Koss nicely summarized my feelings in a blog post published last Friday:
Cisco & VMware: Merging the Virtual and Physical NICs
Virtual (soft) switches present in almost every hypervisor significantly reduce the performance of high-bandwidth virtual machines (measurements done by Cisco a while ago indicate you could get up to 38% more throughput if you tie VMs directly to hardware NICs), but as I argued in my “Soft Switching Might Not Scale, But We Need It” post, we need hypervisor switches to isolate the virtual machines from the vagaries of the physical NICs.
Engineering gurus from Cisco and VMware have yet again proven me wrong – you can combine VMDirectPath and vMotion if you use VM-FEX.
Stretched Layer-2 Subnets – The Server Engineer Perspective
A long while ago I got a very interesting e-mail from Dmitriy Samovskiy, the author of VPN-Cubed, in which he politely asked me why networking engineers find stretched layer-2 subnets so important. As we might get lucky and spot a few unicorns merrily dancing over stretched layer-2 rainbows while attending the Networking Tech Field Day, I decided to share the e-mail contents with you (obviously after getting an OK from Dmitriy).