Virtual Networking Is More Than VMs and VLAN Duct Tape

VMware has a fantastic-looking cloud provisioning tool – vCloud Director. It allows cloud tenants to deploy their VMs and create new virtual networks with a click of a mouse (the underlying network has to provide a range of VLANs, or you could use VXLAN or vCDNI to implement the virtual segments).
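
To make the VLAN-backed option work, whoever runs the physical network has to pre-provision a block of VLANs and trunk it to every hypervisor host, so vCloud Director can grab one VLAN per virtual segment. A minimal switch-side sketch (the VLAN range and interface name are made up for illustration):

    ! Reserve a pool of VLANs for vCloud Director network pools
    vlan 2000-2099
    !
    ! Trunk the whole pool toward each ESXi host
    interface GigabitEthernet1/0/10
     description ESXi host uplink
     switchport mode trunk
     switchport trunk allowed vlan 2000-2099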

Needless to say, when engineers unfamiliar with networking intricacies create point-and-click application stacks without firewalls and load balancers, you get some interesting designs.


Best of February 2012

Google Analytics claims the blog posts describing Nicira were among the most popular content written in February 2012. No surprise there. Here’s the whole top-10 list.


LineRate Proxy: Software L4-7 Appliance With a Twist

Buying a new networking appliance (be it a VPN concentrator, firewall or load balancer … aka Application Delivery Controller) is a royal pain. You never know how much performance you’ll need in two or three years (and your favorite bean counter won’t allow you to scrap it in less than 4-5 years). You do know you’ll never get the performance promised in the vendor’s data sheets … but you don’t always know which combination of features will kill the box.

Now, imagine someone offers you a performance guarantee – you’ll always get what you paid for. That’s what LineRate Systems, a startup just exiting stealth mode, is promising.


Full Mesh Is the Worst Possible Fabric Architecture

One of the answers you get from some of the vendors selling data center fabrics is “you can use any topology you wish,” followed by a rattled-off list of buzzword-bingo-winning terms like full mesh, hypercube and Clos fabric. While full mesh sounds like a great idea (after all, what could possibly go wrong if every switch can talk directly to any other switch?), it’s actually the worst possible architecture (apart from the fully randomized Monkey Design).
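
A quick back-of-the-envelope sketch (my arithmetic, not from the original post) shows why. With N switches in a full mesh, the number of inter-switch links L and the number of fabric-facing ports per switch P are:

    L(N) = \frac{N(N-1)}{2}, \qquad P(N) = N - 1

For N = 16 that’s 120 links and 15 fabric ports burned on every switch, yet any two switches still share exactly one direct shortest path, so most of that capacity sits idle unless you resort to serious traffic engineering.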

Before reading the rest of this post, you might want to visit Derick Winkworth’s The Sad State of Data Center Networking to get in the proper mood.

vCider: A Hammer Looking For a Nail?

Last week Juergen Brendel published an interesting blog post describing how you can use vCider to implement high-availability clusters with a multi-cloud strategy, triggering the following response from one of my readers: “I hadn’t heard of vCider before, but seeing stuff like this always makes me doubt my sanity – is there really a situation where the only solution is multi-site L2?”


Monkey Design Still Doesn’t Work Well

We’ve seen several interesting data center fabric solutions during the Networking Tech Field Day presentations, and every time we heard how the new fabric technologies (actually, the shortest path bridging part of those technologies) allow us to shed the yoke of the Spanning Tree monster (see Understanding Switch Fabrics by Brandon Carroll for more details). Not surprisingly, we wanted to know more and asked the obvious question: “And how would you connect the switches within the fabric?”


Networking Tech Field Day #3: First Impressions

Last week Stephen Foskett and Greg Ferro brought back their merry crew of geeks (and a network security princess) for the third Networking Tech Field Day. We met some exciting new vendors (Infineta and Spirent) and a few long-time friends (Arista, Cisco, NEC and SolarWinds).

Infineta gave us a fantastic deep dive into deduplication math, and Spirent blew our socks off with their testing gear. As for the general state of the networking industry, William R. Koss nicely summarized my feelings in a blog post published last Friday.


IPv6 Legends and Myths: More Opinions than Data Points

Trevor Pott wrote an interesting article in The Register (linking to my IPv6 multihoming post – thank you!) explaining why, in his opinion, IPv6 sucks for small and medium businesses. I wholeheartedly agree with some of his conclusions (I’ve actually agreed with them for the last three years), but unfortunately the article contains several factual errors that simply have to be corrected (I doubt many of Trevor’s readers will find their way to this article, but one can always hope).


Cisco & VMware: Merging the Virtual and Physical NICs

Virtual (soft) switches present in almost every hypervisor significantly reduce the performance of high-bandwidth virtual machines (measurements Cisco did a while ago indicate you could get up to 38% more throughput if you tie VMs directly to hardware NICs). However, as I argued in my “Soft Switching Might Not Scale, But We Need It” post, we need hypervisor switches to isolate the virtual machines from the vagaries of the physical NICs.

Engineering gurus from Cisco and VMware have yet again proven me wrong – you can combine VMDirectPath and vMotion if you use VM-FEX.
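
For the curious, here’s a rough sketch (from memory, so treat the details as assumptions and verify against Cisco’s VM-FEX documentation) of a Nexus 5500 VM-FEX port profile; the profile name and VLAN are made up, and high-performance host-netio is the knob that enables DirectPath I/O for VMs attached to the resulting port group:

    port-profile type vethernet VM-FastPath
      switchport mode access
      switchport access vlan 100
      ! enable DirectPath I/O (VMDirectPath) for VMs using this port group
      high-performance host-netio
      ! export the port profile to vCenter as a distributed port group
      vmware port-group
      no shutdown
      state enabled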


Migrating from Phase 1 DMVPN to Phase 2/3 Network

Chris sent me an interesting question that I haven’t covered in any of my DMVPN webinars: “How would you migrate a part of a Phase-1 DMVPN network to a Phase-2 or Phase-3 network if you can only migrate one spoke site at a time? Can I just upgrade the spokes that need spoke-to-spoke connectivity?”

While it might be theoretically possible to have a mixed Phase-1/Phase-2 DMVPN tunnel (and I just might be able to get it to work in a lab), such a solution definitely violates the KISS principle.
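
For reference, here’s how small the spoke-side difference actually is; a minimal sketch with made-up addresses (192.0.2.1 is the hub’s public IP, 10.0.0.0/24 the tunnel subnet), assuming the hub is also configured with ip nhrp redirect when you move to Phase 3:

    ! Phase 1 spoke: point-to-point GRE tunnel, all traffic flows through the hub
    interface Tunnel0
     ip address 10.0.0.11 255.255.255.0
     ip nhrp network-id 1
     ip nhrp nhs 10.0.0.1
     ip nhrp map 10.0.0.1 192.0.2.1
     tunnel source GigabitEthernet0/0
     tunnel destination 192.0.2.1

    ! Phase 3 spoke: multipoint GRE plus NHRP shortcut switching
    interface Tunnel0
     ip address 10.0.0.11 255.255.255.0
     ip nhrp network-id 1
     ip nhrp nhs 10.0.0.1
     ip nhrp map 10.0.0.1 192.0.2.1
     ip nhrp map multicast 192.0.2.1
     ip nhrp shortcut
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint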


Stretched Layer-2 Subnets – The Server Engineer Perspective

A long while ago I got a very interesting e-mail from Dmitriy Samovskiy, the author of VPN-Cubed, in which he politely asked me why networking engineers find stretched layer-2 subnets so important. As we might get lucky and spot a few unicorns merrily dancing over stretched layer-2 rainbows while attending the Networking Tech Field Day, I decided to share the e-mail contents with you (obviously after getting an OK from Dmitriy).


Looking Into Data Center Storage Protocol Mysteries

Should you use FC, FCoE or iSCSI when deploying new gear in your existing data center? What about greenfield deployments? What are the decision criteria? Should you just skip iSCSI and use NFS if you’re focusing on server virtualization with VMware? Does it still make sense to build a separate iSCSI network? Are jumbo frames useful? We’ll try to answer all these questions and a few more in the first Data Center Virtual Symposium sponsored by Cisco Systems.
