Quick question: IP multicast over an existing IP backbone

Imagine you’d actually want to run VXLAN between two data centers (I wouldn’t, but that’s beside the point at the moment) and the only connectivity between the two is plain IP, no multicast. How would you implement IP multicast across a generic IP backbone? Anything goes, from duct tape (GRE) to creative solutions ... and don’t forget those pesky RPF checks.
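
To give you an idea of what I mean by duct tape: a GRE tunnel with PIM enabled on it, plus a static mroute so the RPF check survives when the unicast routing doesn’t point at the tunnel. Roughly like this on Cisco IOS (addresses and prefixes are made up, and you still need a reachable RP or SSM):

    ip multicast-routing
    !
    interface Tunnel0
     ip address 172.16.0.1 255.255.255.252
     ip pim sparse-mode
     tunnel source Loopback0
     tunnel destination 192.0.2.2
    !
    ! Static mroute: RPF for sources in the other data center points at the tunnel
    ip mroute 10.2.0.0 255.255.0.0 Tunnel0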

12 comments:

  1. Years ago I stuffed a multicast stream into a GRE tunnel with some success. The multicast was a unidirectional IP video stream pulled from a satellite. The stream's source IP was an RFC 1918 address that I couldn't route on our multicast-enabled backbone, so I NAT'd the stream to a routable address using a PIX before sending it out of the site.

    Unfortunately, the source had a TTL of 4, and the broadcast TV guys who uplinked the stream didn't know how to change that, so I stuffed the stream into GRE tunnels before crossing the backbone, just to keep the TTL from decrementing.

    8-)
  2. I've done this sort of thing before too - a lot of the metro Ethernet L2 services have a MAC address limit (say 50) per port, and many of our futures-trading clients have a lot of multicast feeds with many groups, so we stuff the multicast traffic into GRE tunnels between the applicable sites to keep the feeds from exceeding the MAC address limit.
  3. OTV sounds like a good fit here.
  4. I've used the "Service Reflection" feature on the Cat6k with some success for this. It appeared in SXI5 but I believe it's now fairly widely implemented on other platforms.
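
    From memory the config looked roughly like this - a Vif interface that translates the incoming group (and optionally the source) into something routable on the far side; the addresses and group ranges here are made up, so check the exact syntax on your release:

      interface Vif1
       ip address 10.5.1.1 255.255.255.0
       ! Translate groups 224.1.1.0/24 into 239.1.1.0/24, sourced from 10.5.1.2
       ip service reflect GigabitEthernet0/0 destination 224.1.1.0 to 239.1.1.0 mask-len 24 source 10.5.1.2
       ip pim sparse-mode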

    Andrew.
  5. Until just recently (5.2(1)) OTV needed native multicast itself, so not sure that'll be viable, and it only runs on the Nexus 7K and, I think, the ASR 1K, so it may not be an option everywhere. (Pondering the question, though...)
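
    If you could use it: the 5.2(1) unicast-only mode replaces the multicast control plane with an adjacency server, so the NX-OS side would look roughly like this (from memory, so double-check the syntax):

      feature otv
      otv site-identifier 0x1
      otv site-vlan 99
      !
      interface Overlay1
        otv join-interface Ethernet1/1
        ! This edge device acts as the adjacency server; the other sites would
        ! point at it with "otv use-adjacency-server <its IP> unicast-only"
        otv adjacency-server unicast-only
        otv extend-vlan 100-110
        no shutdown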
  6. For a generic IP backbone I would do a generic IP multicast implementation ;)

    Seriously though, I would avoid doing anything creative or special unless something within the IP backbone required it. I've worked on IPTV networks using multicast and the users do not like it when it breaks. Server admins and users of VXLAN would be no different so keep it simple!
  7. GRE doesn't sound right.

    L2TP sounds better.
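
    Something like an Ethernet pseudowire over L2TPv3 between the two DC edge routers, with PIM (or the VXLAN flooding) riding across it; a rough IOS sketch, peer address and VC ID made up:

      pseudowire-class DCI
       encapsulation l2tpv3
       ip local interface Loopback0
      !
      interface GigabitEthernet0/1
       description Attachment circuit toward the local segment to be extended
       xconnect 192.0.2.2 100 pw-class DCI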
  8. Enable IPmc on the IP backbone? If it is a third-party service, like an MPLS VPN, it is probably capable of supporting IPmc with some configuration... even the death star supports PIM-SM with BSR and Auto-RP, as well as PIM-SSM, on their AVN MPLS VPN service offering (although you have to specify that it be enabled, and tell them which multicast groups if you're not using their standards for PIM-SSM).

    Alternative to that ... GRE? ("I can fix anything with a tunnel")
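
    If you do go the PIM-SSM route with the provider, the CE-side config is pretty simple (assuming IGMPv3-capable receivers; interface names are made up):

      ip multicast-routing
      ! Enable SSM for the default 232.0.0.0/8 range
      ip pim ssm default
      !
      interface GigabitEthernet0/0
       description Toward the provider MPLS VPN PE
       ip pim sparse-mode
      !
      interface GigabitEthernet0/1
       description Toward the local receivers
       ip pim sparse-mode
       ip igmp version 3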
  9. You haven't really mentioned much about scale... number of mcast groups, aggregate bandwidth of the IPmc traffic, etc.
  10. Hi,

    There is a protocol intended to carry multicast over non-multicast-enabled networks: AMT (Automatic Multicast without Explicit Tunnels).
    Juniper might have an implementation of it.

    Csilla
  11. Thanks for a pointer to another interesting technology. The way I understand it, you need a host-side part as well, which is not an option in this case (there are other more important bits and pieces missing in ESX/vSphere ;) )
  12. How to multicast the IP address in a logical router?