You’ve Been Doing the Same Thing for the Last 20 Years
When we were discussing my autumn travel plans, my lovely wife asked me “What are you going to talk about in Bern?” She has a technical background, but I didn’t feel like going into the intricacies of SDN, SDDC and NetOps, so I told her the essence of my keynote speech:
The Four Paths to SDN
After the initial onslaught of SDN washing, four distinct approaches to SDN have started to emerge, from centralized control plane architectures to smart reuse of existing protocols.
As always, each approach has its benefits and drawbacks, and there’s no universally best solution. You just got four more (somewhat immature) tools in your toolbox. And now for the details.
SIGS & Carrier’s Lunch DC Day: An Event Definitely Worth Visiting
I spent last Tuesday in Bern attending the SIGS DC Day Event, and came back home extremely pleasantly surprised. The conference was nice and cozy, giving everyone plenty of opportunities to chat about data center technical challenges (thanks for all the wonderful conversations we had – you know who you are!).
Having the opportunity to meet fellow networking engineers and compare notes is great, but it’s even better to combine that with new knowledge, and that’s where the event really excelled.
Tech Talks: The Essence of MPLS
Seamus Gilchrist sent me a fantastic list of MPLS- and MPLS-TE-related questions. Instead of starting an email exchange we agreed on something that should benefit a wider community: a lengthy whiteboard session discussing the basics of MPLS, MPLS-TE, load balancing and QoS in MPLS networks…
The first part of our conversation is already online: The Essence of MPLS.
Open-Source Hybrid Cloud Reference Architecture on Software Gone Wild
A while ago Rick Parker told me about his amazing project: he started a meetup group that will build a reference private/hybrid cloud heavily relying on virtualized network services, and publish all documentation related to their effort, from high-level architecture to device and software configurations, and wiring plans.
In Episode 8 of Software Gone Wild Rick told us more about his project, and we simply couldn’t avoid a long list of topics including:
IPv6 Neighbor Discovery (ND) and Multicast Listener Discovery (MLD) Challenges
A few days ago Garrett Wollman published his exasperating experience running IPv6 on large L2 subnets with Juniper EX4200 switches, concluding that “… much in IPv6 design and implementation has been botched by protocol designers and vendors …” (some of us would forcefully agree), making IPv6 “… simply unsafe to run on a production network …”
The resulting debate on Hacker News is quite interesting (and Andrew Yourtchenko is trying hard to keep it close to facts) and definitely worth reading… but is ND/MLD really as broken as some people claim it is?
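To make the scaling concern more tangible, here’s a minimal Python sketch (my own illustration, not taken from Garrett’s post or the Hacker News thread) of the mechanism at the heart of the debate: every IPv6 address on a segment maps to its own solicited-node multicast group, which is exactly the state an MLD-snooping switch has to track. The function names and sample address are made up for the example.

```python
# A minimal sketch (not from the original post) showing why ND leans on multicast:
# every IPv6 address maps to a solicited-node multicast group (ff02::1:ffXX:XXXX),
# and a switch doing MLD snooping has to track one such group per address.
import ipaddress

def solicited_node_group(addr: str) -> ipaddress.IPv6Address:
    """Return the solicited-node multicast group for an IPv6 address (RFC 4291)."""
    last24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | last24)

def multicast_mac(group: ipaddress.IPv6Address) -> str:
    """Map an IPv6 multicast group to its 33:33:xx:xx:xx:xx Ethernet MAC (RFC 2464)."""
    low32 = int(group) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))

if __name__ == "__main__":
    group = solicited_node_group("2001:db8::dead:beef:1")
    print(group, multicast_mac(group))   # ff02::1:ffef:1 33:33:ff:ef:00:01
```

Multiply that by a few thousand hosts, each with several addresses, and you get a feeling for the amount of MLD state a large L2 subnet generates.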
vMotion Enhancements in vSphere 6
VMware announced several vMotion enhancements in vSphere 6, ranging from “finally” to “interesting”.
vMotion across virtual switches. Finally. The tricks you had to use previously were absolutely bizarre.
Controller Cluster Is a Single Failure Domain
Some OpenFlow-focused startups are desperately trying to tell you how redundant their architecture is. Unfortunately all the whitepapers (and the prancing unicorns) cannot change a simple fact: an SDN controller (OpenFlow-based or otherwise) is in some aspects a single failure domain.
Is Anyone Using DMVPN-over-IPv6?
One of my readers sent me an interesting challenge: his company is deploying a new DMVPN WAN, and since they cannot expect all locations to have native (non-NAT) IPv4 access, they plan to build the new DMVPN over IPv6. He was wondering whether it would work.
Apart from “you’re definitely going in the right direction,” all I could tell him was “looking at the documentation, I couldn’t see why it wouldn’t work.” Has anyone deployed DMVPN over IPv6 in a production network? Any hiccups? Please share your experience in the comments. Thank you!
See You in Bern on September 9th
TL;DR: I'll be in Bern on September 9th. If you'd like to drop by and discuss network design or automation challenges, read on…
Scalability Enhancements in Cisco Nexus 1000V
The latest release of Cisco Nexus 1000V for vSphere can handle almost twice as many vSphere hosts as the previous one (250 instead of 128). Cisco probably did a lot of code polishing to improve Nexus 1000V scalability, but I’m positive most of the improvement comes from interesting architectural changes.
Snabb Switch Deep Dive on Software Gone Wild
The pilot episode of Software Gone Wild podcast featuring Snabb Switch created plenty of additional queries (and thousands of downloads) – it was obviously time for another deep dive episode discussing the intricate innards of this interesting virtual switch.
During the deep dive Luke Gorrie, the mastermind behind the Snabb Switch, answered a long list of questions, including:
Just Published: SDN and OpenFlow – The Hype and the Harsh Reality
If you’re a regular reader of my blog, you know that I spent a lot of time during the last three years debunking SDN myths, explaining the limitations of OpenFlow and pointing out other technologies one could use to program the network.
During the summer of 2014 I organized my SDN- and OpenFlow-related blog posts into a digital book. I want to make this information as useful and as widely distributed as possible – for a limited time you can download the PDF free of charge.
Network Infrastructure as Database
A while ago I wrote about the idea of treating network infrastructure (and all other infrastructure) as code, and using the same processes application developers are using to write, test and deploy code to design and implement networks.
That approach clearly works well if you can virtualize (and clone ad infinitum) everything. We can virtualize appliances or even routers, but installed equipment and high-speed physical infrastructure remain somewhat resistant to that idea. We need a different paradigm, and the best analogy I could come up with is a database.
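To illustrate what I mean, here’s a deliberately simplified Python sketch (table names and sample data are hypothetical, not from any real tool): the desired state of the network lives in a database, and the tooling’s job is to reconcile the devices with that authoritative record.

```python
# A hypothetical illustration (not from any shipping product) of the
# "network as a database" idea: desired state lives in a database, and the
# tooling reconciles reality with it. Table and field names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE desired_ports (device TEXT, port TEXT, vlan INTEGER)")
conn.executemany(
    "INSERT INTO desired_ports VALUES (?, ?, ?)",
    [("leaf1", "Eth1/1", 100), ("leaf1", "Eth1/2", 200), ("leaf2", "Eth1/1", 100)],
)

# Pretend this came from polling the live devices.
actual_state = {("leaf1", "Eth1/1"): 100, ("leaf1", "Eth1/2"): 150}

# Reconciliation: compare the authoritative record with observed state and
# emit the changes needed to converge.
for device, port, vlan in conn.execute("SELECT device, port, vlan FROM desired_ports"):
    current = actual_state.get((device, port))
    if current != vlan:
        print(f"{device} {port}: change VLAN {current} -> {vlan}")
```

Running the sketch simply prints the VLAN changes needed to bring the two example switches in line with the desired state; everything interesting (change validation, rollback, drift detection) would build on top of that comparison.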
Is the Data Center Trilogy Package the Right Fit to Understand Long-Distance vMotion Challenges?
A reader sent me this question:
My company will have 10GE dark fiber across our DCs with possibly OTV as the DCI. The VM team has also expressed interest in DC-to-DC vMotion (<4ms). Based on your blogs it looks like overall you don't recommend long-distance vMotion across DCI. Will the "Data Center trilogy" package be the right fit to help me better understand why?
Unfortunately, long-distance vMotion seems to be a persistent craze that peaks with a predictable period of approximately 12 months, and while it seems nothing can inoculate your peers against it, having technical arguments at hand might help.
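If nothing else, a bit of back-of-the-envelope math (my own, not part of the trilogy content) helps frame the discussion: a 4 ms round-trip budget buys you at most a few hundred kilometers of fiber, before you account for any switching, encapsulation or congestion delays.

```python
# Back-of-the-envelope math (my own, not from the trilogy): what a round-trip
# latency budget means in terms of fiber distance, before you even add
# switching, OTV encapsulation, or queuing delays.
PROPAGATION_US_PER_KM = 5.0   # light in fiber covers roughly 200 km per millisecond

def max_fiber_km(rtt_budget_ms: float) -> float:
    """Maximum one-way fiber distance that fits into the round-trip budget."""
    one_way_budget_us = rtt_budget_ms * 1000 / 2
    return one_way_budget_us / PROPAGATION_US_PER_KM

print(max_fiber_km(4.0))   # ~400 km of fiber, assuming zero equipment delay
```

Every microsecond spent in switches, encapsulation and retransmissions only shrinks that theoretical maximum further.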