Category: WAN
Revisited: Layer-2 DCI over VXLAN
I’m still getting questions about layer-2 data center interconnect; it seems this particular bad idea isn’t going away any time soon. In the face of that sad reality, let’s revisit what I wrote about layer-2 DCI over VXLAN.
VXLAN hasn’t changed much since the time I explained why it’s not the right technology for long-distance VLANs.
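To make the argument concrete, here's a minimal Python sketch of what a VXLAN tunnel endpoint has to do with broadcast, unknown-unicast and multicast (BUM) traffic: the original Ethernet frame is wrapped in an 8-byte VXLAN header plus UDP, and still has to be replicated to every remote VTEP across the DCI link. The VNI value and VTEP addresses below are made up for illustration.

```python
# A minimal sketch of VXLAN encapsulation (RFC 7348) illustrating why the
# technology doesn't change the fundamentals of bridging: the original
# Ethernet frame (including broadcasts and unknown unicasts) is simply
# wrapped in UDP and still has to reach every remote VTEP.
import socket
import struct

VXLAN_PORT = 4789                   # IANA-assigned VXLAN UDP port


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (I flag set, 24-bit VNI) to a frame."""
    flags = 0x08 << 24                          # "VNI valid" flag in byte 0
    header = struct.pack("!II", flags, vni << 8)
    return header + inner_frame


def flood_bum_frame(inner_frame: bytes, vni: int, remote_vteps: list[str]) -> None:
    """BUM frames are replicated to *every* remote VTEP -- head-end
    replication across the DCI link, which is exactly the problem."""
    packet = vxlan_encapsulate(inner_frame, vni)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for vtep in remote_vteps:
        sock.sendto(packet, (vtep, VXLAN_PORT))


# Hypothetical usage: an ARP broadcast in VNI 10000 replicated to two
# made-up VTEP addresses on the far side of the WAN link.
# flood_bum_frame(arp_broadcast_frame, 10000, ["192.0.2.1", "192.0.2.2"])
```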
Going All Virtual with Virtual WAN Edge Routers
If you’re building a greenfield private cloud, you SHOULD consider using virtual network services appliances (firewalls, load balancers, IPS/IDS systems), removing the need for additional hard-to-scale hardware devices. But can we go a step further? Can we replace all networking hardware with x86 servers and virtual appliances?
Layer-2 Extension (OTV) Use Cases
I was listening to the fantastic OTV Deep Dive PQ Packet Pushers podcast while biking around the wonderful Slovenian forests. They started the podcast by discussing OTV use cases, with Ethan throwing in long-distance vMotion (the usual long-distance L2 extension selling point); refreshingly, some of the engineers said “well, that’s not really the use case we see in real life.”
So what were the use cases they were mentioning?
Layer-2 DCI with Enterasys Switches
The second half of the Enterasys DCI Solutions webinar focused on real-life case studies. First, the less interesting one: long-distance live VM migration (you know my feelings about the whole concept, but sometimes you just have to do it) and the role of fabric routing and host routing in the process.
Sooner or Later, Someone Will Pay for the Complexity of the Kludges You Use
I loved listening to the OTV/FabricPath/LISP Packet Pushers podcast. Ron Fuller and Russ White did a great job explaining the roles of OTV, FabricPath and LISP in a stretched (inter-DC) subnet deployment scenario and how the three pieces fit together … but I couldn't stop wondering whether there's a better way to solve the underlying business need than throwing three fairly complex new technologies and the associated equipment (or VDC contexts or line cards) into the mix.
Extending Layer-2 Connection into a Cloud
Carlos Asensio was facing an “interesting” challenge: someone had sold one of the customers a layer-2 extension into their public cloud. Being a good engineer, he wanted to limit the damage the customer could do to the cloud infrastructure and thus immediately rejected the idea of connecting the customer straight into the layer-2 network core ... but what could he do?
Downloadable Recording of Enterasys Data Center Interconnect Solutions Webinar
The recording of the recent Enterasys Robust Data Center Solutions webinar is available on the ipSpace.net demo web site.
You can watch (or download) the following videos:
Is Layer-3 DCI Safe?
One of my readers sent me a great question:
I agree with you that L2 DCI is like driving without a seat belt. But is L3 DCI safer in case of DCI link failure? Let's say you have your own AS and PI addresses in use. Your AS spans multiple sites and there are external BGP peers on each site. What happens if the L3 DCI breaks? How will that impact your services?
Simple answer: while L3 DCI is orders of magnitude safer than L2 DCI, it will eventually fail, and you have to plan for that.
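To illustrate one of the failure modes you have to plan for, here's a minimal Python sketch of standard eBGP AS-path loop prevention: once the DCI link fails and your single AS is partitioned, each site keeps advertising its half of the PI space, but routes for the other site re-learned from an upstream provider carry your own AS number in the AS path and are dropped by default. The AS numbers and prefixes below are made up.

```python
# A minimal sketch of eBGP AS-path loop prevention (standard BGP behavior,
# see RFC 4271) showing why a partitioned AS can't simply re-learn the other
# site's routes from an upstream provider. AS numbers and prefixes are made up.
MY_AS = 65000


def accept_ebgp_route(prefix: str, as_path: list[int], local_as: int = MY_AS) -> bool:
    """Return True if an eBGP-learned route passes the AS-path loop check."""
    if local_as in as_path:
        # Our own AS already appears in the path -> treated as a loop, dropped.
        return False
    return True


# Site A (AS 65000) trying to reach site B's half of the PI space via its
# upstream after the DCI failure: the route was originated by site B (also
# AS 65000), so the loop check fails and the two sites stay isolated.
print(accept_ebgp_route("198.51.100.0/25", as_path=[64501, 65000]))  # False
print(accept_ebgp_route("203.0.113.0/24", as_path=[64501, 64510]))   # True
```

There are well-known workarounds (for example, using a different AS number per site), but you have to design them in before the failure happens.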
Layer-2 DCI and the infinite wisdom of acmqueue
Yesterday I got pulled into a layer-2 DCI tweetfest. Not surprisingly, there were profound opinions all over the place, including “We've been doing it (OTV) for almost a year now. No problems.”
OTV is in fact the least horrible option – it does quite a few things right, including tight control of unicast flooding and reduction of STP scope.
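Here's a conceptual sketch (in Python, definitely not OTV code) of what “tight control of unicast flooding” means in practice: forwarding across the overlay is driven by control-plane-advertised MAC addresses, and frames toward unknown destinations are dropped instead of being flooded to every remote site. The MAC addresses and site names are invented.

```python
# A conceptual sketch of control-plane-driven overlay forwarding: only MAC
# addresses advertised by remote sites are forwarded across the overlay;
# unknown-unicast frames are dropped instead of flooded.
control_plane_macs = {
    # MAC address           -> remote site that advertised it
    "00:50:56:aa:bb:01": "dc-west",
    "00:50:56:aa:bb:02": "dc-east",
}


def forward_across_overlay(dst_mac: str) -> str | None:
    """Return the remote site for a known MAC, or None (drop, don't flood)."""
    site = control_plane_macs.get(dst_mac)
    if site is None:
        # Classic bridging would flood this frame to every site; suppressing
        # the flood is what keeps the failure domain (somewhat) contained.
        return None
    return site


print(forward_across_overlay("00:50:56:aa:bb:01"))  # dc-west
print(forward_across_overlay("00:50:56:ff:ff:ff"))  # None -> dropped
```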
Today I stumbled across this gem in the acmqueue blogs:
You might as well ask why people insist on not wearing seatbelts after all of the years that particular technology has been proven to save lives.
People will, it seems, persist in the optimistic belief that everything will be OK so long as they are otherwise careful. They think that bad things happen only to other people’s protocols, or packets, but not to theirs. Hope springs eternal and dies in the cold, cold winter of experience.
Finding this one a day after discussing layer-2 DCI? There really are no coincidences.
The Difference between Metro Ethernet and Stretched Data Center Subnets
Every time I rant about large-scale bridging and stretched L2 subnets, someone inevitably points out that Carrier (or Metro) Ethernet works perfectly fine using the same technologies and principles.
I won’t spend any time on the “perfectly fine” part, but will focus instead on the fundamental difference between the two: the use case.
OpenFlow @ Google: Brilliant, but not revolutionary
Google unveiled some details of its new internal network at the Open Networking Summit in April, and predictably the industry press and OpenFlow pundits exploded with “this is the end of networking as we know it” glee. Unfortunately, I haven’t seen a single serious technical analysis of what they’re actually doing and how different their new network is from what we have today.
RFC Tidbit: IPv6 in 3GPP mobile networks
Did you ever want to have a high-level overview of how 3G/4G mobile networks work? Where GGSN and SGSN fit in? What PDP contexts are ... and why you need two of them for dual-stack connectivity? All that (and a lot more) is explained in the very well written IETF draft IPv6 in 3GPP Evolved Packet System. Highly recommended reading.
Busting Layer-2 Data Center Interconnect Myths
A few weeks ago I delivered a short L2 DCI WebEx presentation to CCIE Club Poland. I took the L2 part of my Data Center Interconnect webinar and added 15 minutes of L2 DCI mythbusting.
That part of my presentation is on YouTube; for the rest, watch my Data Center Interconnect webinar.
MPLS is not tunneling
Greg (@etherealmind) Ferro started an interesting discussion on Google+, claiming that MPLS is just tunneling and duct tape, like NAT. I would be the first to admit MPLS has its complexities (not many ;) and shortcomings (a few ;), but calling it a tunnel just confuses the innocents. MPLS is not tunneling; it’s a virtual-circuits-based technology, and the difference between the two is a major one.
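Here's a minimal Python sketch of that distinction, with made-up router names and label values: every label switch router along an LSP keeps per-circuit state and swaps the label hop by hop (classic virtual-circuit behavior), whereas a tunnel carries the original packet unchanged between two endpoints, which are the only devices holding tunnel state.

```python
# A minimal sketch of hop-by-hop label swapping along an LSP: every LSR holds
# per-LSP forwarding state (in-label -> out-label, next hop), unlike a tunnel,
# where only the two endpoints know anything about the encapsulation.
lfib = {
    # router:  in-label -> (out-label, next hop)
    "PE1": {None: (100, "P1")},     # push at the ingress LSR
    "P1":  {100:  (200, "P2")},     # swap
    "P2":  {200:  (300, "PE2")},    # swap
    "PE2": {300:  (None, None)},    # pop at the egress LSR
}


def walk_lsp(ingress: str) -> None:
    """Follow an LSP through the label-swapping state held on every hop."""
    router, label = ingress, None
    while router is not None:
        out_label, next_hop = lfib[router][label]
        print(f"{router}: label {label} -> {out_label}, next hop {next_hop}")
        router, label = next_hop, out_label


walk_lsp("PE1")
```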
Long-distance IRF Fabric: Works Best in PowerPoint
HP has commissioned an IRF network test that came to absolutely astonishing conclusions: vMotion runs almost twice as fast across two links bundled in a port channel as across a single link (with the other one blocked by STP). The test report contains one other gem, this one a result of the incredible creativity of HP marketing:
For disaster recovery, switches within an IRF domain can be deployed across multiple data centers. According to HP, a single IRF domain can link switches up to 70 kilometers (43.5 miles) apart.
You know my opinions about stretched clusters… and the more down-to-earth part of HP Networking (the people writing the documentation) agrees with me.