Layer-2 Extension (OTV) Use Cases

I was listening to the fantastic OTV Deep Dive PQ Packet Pushers podcast while biking around the wonderful Slovenian forests. They started the podcast by discussing OTV use cases, with Ethan throwing in long-distance vMotion (the usual long-distance L2 extension selling point), but refreshingly some of the engineers said “well, that’s not really the use case we see in real life.”

So what were the use cases they were mentioning?

I loved one of them – someone using OTV to get away from a layer-2 interconnect. They had a traditional L2 interconnect (and all the associated “goodies”), decided to convert it to an L3 interconnect, but still needed some stretched VLANs during the migration period.

And here are the other use cases I gleaned from the podcast:

External BGP subnets – you have a single /24 IPv4 prefix that you have to announce from more than one data center. Because you can’t advertise two /25s to the Internet, most people would immediately think about stretching that same subnet across more than one location and hoping that everything works.

Not surprisingly, if the inter-DC WAN link fails, you’ll face a nice split-brain scenario with both data centers advertising the subnet, effectively preventing some users from reaching the correct data center … unless you do some fancy routing, which brings me to the point: you don’t need a stretched layer-2 subnet to implement this scenario, you just need proper design and some more intelligent routing.
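To make “more intelligent routing” slightly more concrete, here’s a minimal sketch of one way to do it, assuming you run a route injector such as ExaBGP at each WAN edge and announce the shared prefix only while the local application instance answers a health check. The prefix, health-check target and timer are made-up examples, and you could achieve a similar effect with conditional route advertisement on the WAN edge routers themselves:

```python
#!/usr/bin/env python3
# Hypothetical ExaBGP helper process: announce "our" /24 only while this
# data center can prove it is healthy (here: a TCP handshake against a
# local application VIP). All addresses are made-up examples.
import socket
import sys
import time

PREFIX = "192.0.2.0/24"             # the subnet both data centers could advertise
HEALTH_TARGET = ("10.1.1.10", 443)  # local service that must be reachable

def healthy() -> bool:
    """Return True if the local service completes a TCP handshake."""
    try:
        with socket.create_connection(HEALTH_TARGET, timeout=2):
            return True
    except OSError:
        return False

announced = False
while True:
    up = healthy()
    if up and not announced:
        # ExaBGP reads routing commands from this process's stdout
        sys.stdout.write(f"announce route {PREFIX} next-hop self\n")
        sys.stdout.flush()
        announced = True
    elif not up and announced:
        sys.stdout.write(f"withdraw route {PREFIX} next-hop self\n")
        sys.stdout.flush()
        announced = False
    time.sleep(5)
```

Run one copy per data center and the site that loses its application withdraws the prefix instead of black-holing half of your users.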

Now, I totally understand that some customers love to sprinkle another layer of pixie dust over their network instead of investing in proper BGP design and deployment. As a system integrator you usually have to go with what your customers want (and are willing to pay for), but L2 extension still carries a hefty price tag (particularly if you have to buy M2 linecards and an OTV license for the Nexus 7000), which might be a bit higher than the cost of a BGP course plus paying someone to design your DC WAN edge (or review your design).

A totally irrelevant side remark: you do know that you can get more than enough IPv6 address space to cover all the data centers you might have anywhere in the solar system, don’t you?

Data Center migration, which is a perfect use case that even I would support. Do keep in mind that you have to sync a lot of things (including storage), which could make the migration project a bit more complex than a simple shutdown-move-power-up procedure, but if you have to move the data center and cannot agree on a reasonably long maintenance window within the next 6 months, you just might have to use long-distance vMotion and hope nothing crashes in the process.

Also, keep in mind that your migration might not be as fast as you expect it to be – some people managed to move 30 VMs in a weekend, which was such a phenomenal achievement that EMC simply had to document it in a press release.

Finally, don’t forget to turn off layer-2 extension when you’re done – you wouldn’t want to turn two data centers into a single failure domain, would you?

Disaster recovery with SRM – yet another use case supporting laziness at the cost of network complexity. I totally understand that you have to use the same subnet in both data centers because some craplications simply cannot survive a changed IP address, but I can’t grasp why you wouldn’t use SRM external hooks and reconfigure the switches with NETCONF (or XMPP or Puppet) during the SRM recovery process to recreate the subnet in the other data center.
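For illustration, here’s a rough sketch of what such an SRM recovery hook might look like with Python and ncclient. The switch name, credentials, VLAN number and (above all) the XML payload are made-up placeholders; the real payload depends on the NETCONF data model your switches support:

```python
#!/usr/bin/env python3
# Hypothetical SRM recovery-plan hook: recreate the failed-over subnet's
# VLAN on the recovery-site switch via NETCONF instead of permanently
# stretching it between data centers. Hostname, credentials, VLAN ID and
# the XML payload are placeholders, not a tested device configuration.
from ncclient import manager

SWITCH = "dc2-core.example.com"
VLAN_ID = 100
VLAN_NAME = "recovered-app-subnet"

# Placeholder payload; substitute the element tree your platform expects.
CONFIG = f"""
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <vlans>
    <vlan>
      <id>{VLAN_ID}</id>
      <name>{VLAN_NAME}</name>
    </vlan>
  </vlans>
</config>
"""

def recreate_subnet() -> None:
    """Push the VLAN definition to the recovery-site switch."""
    with manager.connect(
        host=SWITCH,
        port=830,
        username="srm-hook",
        password="use-a-vault-in-real-life",  # or SSH keys
        hostkey_verify=False,                 # verify host keys in production
    ) as conn:
        conn.edit_config(target="running", config=CONFIG)

if __name__ == "__main__":
    recreate_subnet()
```

The same idea works with whatever automation tool you already trust (XMPP and Puppet being the other options mentioned above); the point is that the subnet gets created on demand during recovery instead of being stretched across the DCI around the clock.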

BTW, if you’re running anything more complex than an SMB web hosting environment, you probably have to migrate firewall and load balancer configurations as well, in which case recreating the lost subnet is the least of your worries … unless you already deployed virtual appliances.

Summary – I’m still looking for a good layer-2 extension use case (apart from the migration ones).

More Information

You’ll find all you never wanted to know about Data Center interconnects (layer-2 and layer-3, including MPLS/VPN) in the DCI webinar.


7 comments:

  1. I know you're not a fan, but we've been running split DMZs using OTV for some time now. It's a unique design and not for everyone, but it has performed flawlessly so far, even during a multicast disruption that caused an OTV failure. Not saying it is in any way a preferred way of designing DMZs, but as a tool, OTV has use cases, performs well, and is fairly easy to troubleshoot. Like anything else, it's a matter of identifying risks, mitigating what you can, and determining the overall benefits-versus-risks impact to the organization.
  2. Santino, are you saying an overlay is easy to troubleshoot? :)
  3. Yes, an OTV overlay is fairly easy to troubleshoot. I am in no way implying that the applications within the overlay, as they traverse the L3 transport, are easy to troubleshoot. :)
  4. I am one of the lazy engineers who implemented OTV for SRM. We had three excuses.

    1) Support for partial failover as most failures are not complete site failures. (Yes, we considered the issue of applications being split between data centers while running in this state).

    2) Virtual appliances where SRM IP customization does not work. (These same VMs often have no APIs and cannot be scripted).

    3) Broken VMs where IP customization failed (yes, fixing those VMs would have been better).
  5. I've got pretty much the same requirements as Ryan, though I'm not sure I'd call them excuses. #1 on the wish-list is the ability to do a partial site fail-over (for a use case like an array failure).
  6. Question: can I create OTV using Nexus 5K switches interconnected to ASR 1000 routers, or is OTV only supported on 7K switches? I am in the same situation where we have SRM and I am trying to figure out what is best here: having Nexus 5K core switches with ASR routers for OTV, or simply a 7K. If it is a 7K, I would like to have FEX functionality, but for that I will need an F2 module, and I am just not sure if I can have an F2E module and then down the road buy an M2 module and achieve OTV. I know it all has to be in different VDCs etc., but I guess my main question here is: can I use OTV with Nexus 5K switches as opposed to getting a 7K?
    Replies
    1. OTV works on Nexus 7000 and ASR routers. For hardware-specific issues (and I know they're still a royal pain) you'd have to check Cisco's web site.