Category: Design
Temper Your MacGyver Streak
Microseconds after VXLAN was launched at VMworld 2011, someone started promoting it as a data center interconnect (DCI) solution. Even though layer-2 DCI doesn’t make much sense (even to server people) and VXLAN is really not a DCI solution, the lure of misusing a technology was irresistible.
Virtual Networking is more than VMs and VLAN duct tape
VMware has a fantastic-looking cloud provisioning tool – vCloud Director. It allows cloud tenants to deploy their VMs and create new virtual networks with a click of a mouse (the underlying network has to provide a range of VLANs, or you could use VXLAN or vCDNI to implement the virtual segments).
Needless to say, when engineers unfamiliar with networking intricacies create point-and-click application stacks without firewalls and load balancers, you get some interesting designs.
Full Mesh Is the Worst Possible Fabric Architecture
One of the answers you get from some of the vendors selling you data center fabrics is “you can use any topology you wish” and then they start to rattle off an impressive list of buzzword-bingo-winning terms like full mesh, hypercube and Clos fabric. While full mesh sounds like a great idea (after all, what could possibly go wrong if every switch can talk directly to any other switch), it’s actually the worst possible architecture (apart from the fully randomized Monkey Design).
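The port math alone shows why full mesh falls apart as the fabric grows: with N switches, every switch burns N−1 ports on fabric links and the fabric needs N(N−1)/2 inter-switch links, while a leaf-and-spine (Clos) fabric needs only one uplink per spine on each leaf. Here’s a minimal back-of-the-envelope sketch; the switch and spine counts are illustrative assumptions, not figures from the original article:

```python
# Back-of-the-envelope comparison of fabric link/port consumption.
# Switch counts below are illustrative assumptions only.

def full_mesh(n_switches):
    """Every switch connects directly to every other switch."""
    links = n_switches * (n_switches - 1) // 2
    ports_per_switch = n_switches - 1          # fabric ports burned on every switch
    return links, ports_per_switch

def leaf_spine(n_leaves, n_spines):
    """Two-tier Clos fabric: every leaf connects to every spine."""
    links = n_leaves * n_spines
    ports_per_leaf = n_spines                  # uplink ports burned on every leaf
    return links, ports_per_leaf

if __name__ == "__main__":
    for n in (8, 16, 32):
        links, ports = full_mesh(n)
        print(f"full mesh,  {n:2d} switches: {links:4d} links, {ports:2d} fabric ports per switch")
    for leaves, spines in ((8, 2), (16, 4), (32, 4)):
        links, ports = leaf_spine(leaves, spines)
        print(f"leaf-spine, {leaves:2d} leaves x {spines} spines: {links:4d} links, {ports:2d} uplinks per leaf")
```

At 32 switches the full mesh already needs 496 inter-switch links and 31 fabric ports on every switch, while each pair of switches still gets only a single link’s worth of direct bandwidth – the quadratic growth is the whole problem.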
vCider: A Hammer Looking For a Nail?
Last week Juergen Brendel published an interesting blog post describing how you can use vCider to implement high-availability clusters as part of a multi-cloud strategy, triggering the following response from one of my readers: “I hadn't heard of vCider before but seeing stuff like this always makes me doubt my sanity – is there really a situation where the only solution is multi-site L2?”
Monkey Design Still Doesn’t Work Well
We’ve seen several interesting data center fabric solutions during the Networking Tech Field Day presentations, every time hearing how the new fabric technologies (actually, the shortest path bridging part of those technologies) allow us to shed the yoke of the Spanning Tree monster (see Understanding Switch Fabrics by Brandon Carroll for more details). Not surprisingly, we wanted to know more and asked the obvious question: “and how would you connect the switches within the fabric?”
Migrating from Phase 1 DMVPN to Phase 2/3 Network
Chris sent me an interesting question that I haven’t covered in any of my DMVPN webinars: “How would you migrate a part of a Phase-1 DMVPN network to a Phase-2 or Phase-3 network if you can only migrate one spoke site at a time? Can I just upgrade the spokes that need spoke-to-spoke connectivity?”
While it might be theoretically possible to have a mixed Phase-1/Phase-2 DMVPN tunnel (and I just might be able to get it to work in a lab), such a solution definitely violates the KISS principle.
Redundant DMVPN Designs, Part 2 (Multiple Uplinks)
In Redundant DMVPN Designs, Part 1, I described the options you have when you want to connect non-redundant spokes to more than one hub. In this article, we’ll go a step further and design hub and spoke sites with multiple uplinks.
Public IP Addressing
Fact: DMVPN tunnel endpoints have to use public IP addresses or the hub/spoke routers wouldn’t be able to send GRE/IPsec packets across the public backbone.
Should I Use 6PE or Native IPv6 Transport?
One of my students was watching the Building IPv6 Service Provider Core webinar and wondered whether he should use 6PE or native IPv6 transport:
Could you explain further why it is better to choose 6PE over running IPv6 in the core? I have to implement IPv6 where I work (a small ISP) and need to fully understand why I should choose a certain implementation.
Here’s a short decision tree that should help you make that decision:
OpenFlow Deployment Models
I hope you never believed the “OpenFlow networking nirvana” hype in which smart open-source programmable controllers control dumb low-cost switches, busting the “networking = mainframes” model and bringing the Linux-like golden age to every network. As the debates during the OpenFlow symposium clearly illustrated, the OpenFlow reality is way more complex than it appears at first glance.
To make it even more interesting, at least four different models for OpenFlow deployment have already emerged:
Generic VLAN Design
Like every other blogger, I get occasional e-mails from people fishing for free consulting or a second opinion (note: asking a serious technical question is a totally different story; as many people know, I always try to reply and help), and as I’m totally overloaded with the OpenFlow symposium and Net Field Day these days, I decided to share one of the better ones.