During the February Packet Party someone asked the evergreen question: “Is it safe to run Internet services in a VRF?” and my off-the-cuff answer was (as always) “Doing that will definitely consume more memory than having the Internet routes in the global routing table.” After a few moments Derick Winkworth looked into one of his routers and confirmed the difference is huge ... but then he has a very special setup, so I decided to do a somewhat controlled test.
Straight from RFC 6670 (section 3.4):
[...] as is usually the case with communications technologies, simplification in one element of the system introduces an increase (possibly a non-linear one) in complexity elsewhere. This creates the "squashed sausage" effect, where reduction in complexity at one place leads to significant increase in complexity at a remote location.
This is probably the most concise description of the great idea of using long-distance vMotion for “mission-critical” craplications, and it applies equally well to the kludges used to compensate for the simplicity of virtual switches.
One of the posts I wrote in June 2009 described a concept that matters only if you’re studying for the CCIE exam: EIGRP load and reliability metrics. Other more earthly topics included ADSL reference diagram and ADSL QoS basics.
Almost a year ago, I predicted that eventually the hypervisor vendors would wake up and realize it’s time to get rid of VLANs and decouple virtual networks from the physical world. We got the first glimpse of the brave new world a few weeks after that post was published with the VXLAN launch, but that was still a Cisco solution running on top of VMware’s (and now everyone else’s) hypervisor. VMware’s recent acquisition of Nicira proves that VMware finally woke up big time.
I won’t spend any time on the “perfectly fine” part (Greg Ferro had a lot to say about that in the early Packet Pushers podcasts), but focus on the fundamental difference between the two: the use case.
Whenever I’m explaining MPLS/VPN technology, I recommend using the same route targets (RT) and route distinguishers (RD) in all VRFs belonging to the same simple VPN. The Single RD per VPN recommendation doesn’t work well for multi-homed sites, so one might wonder whether it would be better to use a different RD in every VRF. The RD-per-VRF design also works, but results in significantly increased memory usage on PE-routers.
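The difference between the two designs can be sketched in a few lines of PE-router configuration. This is a minimal illustration in Cisco IOS syntax; the VRF name and the 65000:10x values are hypothetical, not taken from the original post:

```
! Shared-RD design: every PE-router uses the same RD for the VPN
ip vrf Customer_A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
! RD-per-VRF design: each PE-router uses a unique RD
! (PE-1 shown; PE-2 would use rd 65000:102, and so on),
! while the route targets stay identical across all VRFs
ip vrf Customer_A
 rd 65000:101
 route-target export 65000:100
 route-target import 65000:100
```

With unique RDs, prefixes from a multi-homed site become distinct VPNv4 routes instead of comparable paths for the same route, which is exactly why every PE-router ends up storing more BGP entries.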
2012-07-24: Updated the conclusions based on feedback from nosx
In a recent blog post, Chuck Hollis described how some EMC customers use long-distance workload mobility. Not surprisingly, he focused on the VPLEX Metro part of the solution and didn’t even mention the earth-flattening requirements this idea imposes on the network. I guess you already know my views on that topic, but regardless of my personal opinions, he got me curious.
One of the German ISPs is actually quite busy rolling out IPv6 after their CFO got a call from a stock analyst right during the RIPE meeting, asking “so what are your IPv6 plans?” – “None, what is IPv6?” – “Oh, that’s not so good”… followed by full panic down the management chain…
It proves the everlasting wisdom from Martin Levy (source; the rest of the article is not worth reading):
You can either do a planned, careful migration, or you can do it in a panic. And you should know full well that panicking is more expensive.
I get this question every second week or so – someone would like to buy the yearly subscription and wonders whether she’ll be able to watch the recordings on her iPad.
Short answer: Yes for most webinars.
Update 2012-07-13: All webinars recorded prior to July 1st 2012 are available in ARF format. Many of them are also available in edited MP4 format.
My tweet about the latest proof of my layer-2 = single failure domain claim has raised numerous questions about the use of bridging (aka switching) within Internet Exchange Points (IXP). Let’s see why most IXPs use L2 switching and why L2 switching is the simplest solution to the problem they’re solving.
David Le Goff sent me several great SDN-related questions. Here’s the first one:
What is your take on the performance issue with software-based equipment when dealing with general purpose CPU only? Do you see this challenge as a hard stop to SDN business?
Short answer (as always): it depends. However, I think most people approach this issue the wrong way.
This post is probably a bit premature, but I’m positive your CIO will get a visit from a vendor offering clean-slate OpenFlow/SDN-based data center fabrics in the not-so-distant future. At that moment, one of the first questions you should ask is “how well does your new wonderland integrate with my existing network?” or, more specifically, “which L2 and L3 protocols do you support?”
A friendly offer to write a guest blog post for my blog just landed in my Inbox. It's amazing what some people think networking engineers really need.
A lot of engineers are concerned with what seems to be frivolous creation of new encapsulation formats supporting virtual networks. While STT makes technical sense (it allows soft switches to use existing NIC TCP offload functionality), it’s harder to figure out the benefits of VXLAN and NVGRE. Scott Lowe wrote a great blog post recently where he asked a very valid question: “Couldn’t we use MPLS over GRE or IP?” We could, but we wouldn’t gain anything by doing that.