Category: MPLS
Edge and Core OpenFlow (and why MPLS is not NAT)
More than a year ago, I explained why end-to-end flow-based forwarding doesn’t scale (and Doug Gourlay did the same using way more colorful language) and what the real-life limitations are. Not surprisingly, the gurus that started the whole OpenFlow movement came to the same conclusions and presented them at the HotSDN conference in August 2012 ... but even that hasn’t stopped some people from evangelizing the second coming.
Secondary MPLS-TE Tunnels and Fast Reroute
Ronald sent me an interesting question: What's the point of having a secondary path set up for a certain LSP, when this LSP also has fast-reroute enabled (for example, with the Junos fast-reroute command)?
The idea of having a pre-established secondary LSP backing up a traffic engineering tunnel was commonly discussed before FRR was widely adopted, but should have quietly faded away by now.
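To put the question in context, here’s a rough Junos sketch (hypothetical LSP, path names, and addresses) of a tunnel configured with both mechanisms: fast-reroute protecting the primary path, plus a pre-signaled standby secondary path:

  protocols {
      mpls {
          label-switched-path PE1-to-PE2 {
              to 192.0.2.2;
              /* local protection: detours computed around each hop of the primary path */
              fast-reroute;
              primary via-P1;
              secondary via-P2 {
                  /* pre-signaled end-to-end backup path */
                  standby;
              }
          }
          path via-P1 {
              10.0.1.1 loose;
          }
          path via-P2 {
              10.0.2.1 loose;
          }
      }
  }

Fast reroute gives you quick local repair around a failed link or node; a pre-signaled standby path gives the head end an immediate end-to-end alternative to switch to. The question is whether you still need the latter once you have the former.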
OpenFlow and Ipsilon: Nothing New Under the Sun
I’d promised to record another MPLS-related podcast and wanted to refresh my failing memory and revisit the beginnings of Tag Switching (Cisco’s proprietary technology that was used as the basis for MPLS). Several companies were trying to solve the IP+ATM integration problem in the mid-nineties, most of them using IP-based architectures (Cisco, IBM, 3Com), while Ipsilon tried its luck with a flow-based solution.
Could MPLS-over-IP replace VXLAN or NVGRE?
A lot of engineers are concerned about what seems to be the frivolous creation of new encapsulation formats supporting virtual networks. While STT makes technical sense (it allows soft switches to use existing NIC TCP offload functionality), it’s harder to figure out the benefits of VXLAN and NVGRE. Scott Lowe recently wrote a great blog post in which he asked a very valid question: “Couldn’t we use MPLS over GRE or IP?” We could, but we wouldn’t gain anything by doing that.
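Just to show there’s no magic involved, here’s roughly what MPLS-over-GRE looks like on Cisco IOS (a sketch with made-up addressing): build a GRE tunnel between the two endpoints and enable MPLS on the tunnel interface.

  interface Tunnel0
   description MPLS-over-GRE toward the remote PE (hypothetical addresses)
   ip address 172.16.0.1 255.255.255.252
   tunnel source Loopback0
   tunnel destination 192.0.2.2
   mpls ip

You get labeled packets across an IP core; whether that buys you anything over VXLAN or NVGRE is exactly the question the post answers (spoiler: it doesn’t).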
Virtual Circuits in OpenFlow 1.0 World
Two days ago I described how you can use tunneling or labeling to reduce the forwarding state in the network core (which you have to do if you want to have reasonably fast convergence with currently-available OpenFlow-enabled switches). Now let’s see what you can do in the very limited world of OpenFlow 1.0.
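OpenFlow 1.0 has no MPLS push/pop actions, so about the only label you can work with is the VLAN tag. Here’s a rough sketch of the idea using Open vSwitch’s ovs-ofctl (hypothetical bridge names and port numbers; each command would go on a different switch):

  # Edge switch: classify the traffic and map it into "virtual circuit" VLAN 100
  ovs-ofctl add-flow br-edge "in_port=1,dl_type=0x0800,nw_dst=198.51.100.0/24,actions=mod_vlan_vid:100,output:2"
  # Core switch: forward on the VLAN tag alone, no per-flow state
  ovs-ofctl add-flow br-core "in_port=1,dl_vlan=100,actions=output:2"
  # Egress edge switch: remove the tag before delivering the packet
  ovs-ofctl add-flow br-egress "in_port=1,dl_vlan=100,actions=strip_vlan,output:2"

The edge switches carry the fine-grained classification state; the core switches need one entry per virtual circuit regardless of how many flows ride across it.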
Forwarding State Abstraction with Tunneling and Labeling
Yesterday I described how the limited flow setup rates offered by most commercially-available switches force the developers of production-grade OpenFlow controllers to drop the microflow ideas and focus on state abstraction (people living in a dreamland usually go in a totally opposite direction). Before going into OpenFlow-specific details, let’s review the existing forwarding state abstraction technologies.
BGP-Free Service Provider Core in Pictures
I got a follow-up question to the Should I use 6PE or native IPv6 post:
Am I remembering correctly that if you run IPv6 native throughout the network you need to enable BGP on all routers, even P routers? Why is that?
I wrote about BGP-free core before, but evidently wasn’t clear enough, so I’ll try to fix that error.
Imagine a small ISP with a customer-facing PE-router (A), two PE-routers providing upstream connectivity (B and D), a core router (C), and a route reflector (R). The ISP is running IPv4 and IPv6 natively (no MPLS).
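With native IP (v4 or v6) forwarding, C has to do a routing lookup on every transit packet, so it needs the full BGP table; as soon as you run MPLS between the PE-routers, C switches labels and needs nothing beyond the IGP and LDP. Here’s a rough Cisco IOS sketch (hypothetical addressing) of what C’s configuration boils down to in the MPLS case:

  router ospf 1
   network 10.0.0.0 0.255.255.255 area 0
  !
  mpls ldp router-id Loopback0 force
  !
  interface GigabitEthernet0/0
   ip address 10.0.12.1 255.255.255.252
   mpls ip
  !
  interface GigabitEthernet0/1
   ip address 10.0.13.1 255.255.255.252
   mpls ip

There’s no router bgp stanza anywhere on C; the full routing table lives only on A, B, D, and the route reflector.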
Should I Use 6PE or Native IPv6 Transport?
One of my students was watching the Building IPv6 Service Provider Core webinar and wondered whether he should use 6PE or native IPv6 transport:
Could you explain further why it is better to choose 6PE over running IPv6 in the core? I have to implement IPv6 where I work (a small ISP) and need to fully understand why I should choose a certain implementation.
Here’s a short decision tree that should help you make that decision:
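Whichever way the decision goes, it helps to know how little 6PE configuration actually takes on the PE-routers. A rough Cisco IOS sketch (hypothetical AS number and peer address) of the iBGP session toward the other 6PE router or route reflector:

  router bgp 64500
   neighbor 192.0.2.9 remote-as 64500
   neighbor 192.0.2.9 update-source Loopback0
   !
   address-family ipv6
    neighbor 192.0.2.9 activate
    neighbor 192.0.2.9 send-label
   exit-address-family

The send-label knob is what makes it 6PE: IPv6 prefixes are advertised together with an MPLS label, and the P-routers keep forwarding labeled packets without ever hearing about IPv6.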
Junos Day One: MPLS Behind The Scenes
When I started making my first wobbling steps into the Junos MPLS world, Dan (@jonahsfo) Backman took the time to explain the differences between Cisco IOS and Junos MPLS implementations (and some of the reasons they are so different). This is my feeble attempt at describing what I understood from what he told me.
Junos Interfaces and Protocols: Now I get it
My Junos versus Cisco IOS: Explicit versus Implicit received a huge amount of helpful comments, some of them slightly philosophical, others highly practical – from using interfaces all combined with interface disable in routing protocol configuration, to using configuration groups (more about that fantastic concept in another post).
However, understanding what’s going on is not the same as being able to explain it in one sentence ... and Dan (@jonahsfo) Backman beautifully nailed that one.
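For anyone who hasn’t seen the interface all trick mentioned above, it boils down to two lines (a sketch with a hypothetical OSPF setup): enable the protocol on all interfaces, then explicitly disable the ones that shouldn’t run it.

  set protocols ospf area 0.0.0.0 interface all
  set protocols ospf area 0.0.0.0 interface fxp0.0 disable

Both statements are explicit, and the more specific one wins.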
Junos versus Cisco IOS: Explicit versus Implicit
My first Junos labbing project was an IPv6 backbone; I wanted to create a simple single-area IS-IS/BGP-free backbone running LDP and MPLS, and using 6PE for IPv6 connectivity. Needless to say, even though I read the excellent Day One books (highly recommended: Exploring IPv6, Advanced IPv6 configuration and Deploying MPLS), I stumbled on almost every step.
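For the record, the working core of that design ends up being a handful of lines per interface (sketched here with hypothetical interface names and addresses); note that you have to enable family mpls on the interface in addition to listing the interface under the protocols:

  set interfaces lo0 unit 0 family iso address 49.0001.0100.0000.0001.00
  set interfaces ge-0/0/0 unit 0 family inet address 10.0.0.1/30
  set interfaces ge-0/0/0 unit 0 family iso
  set interfaces ge-0/0/0 unit 0 family mpls
  set protocols isis interface ge-0/0/0.0
  set protocols mpls interface ge-0/0/0.0
  set protocols ldp interface ge-0/0/0.0

Everything has to be stated twice (once on the interface, once under the protocol), which is exactly the explicit approach the post title refers to.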
Junos Versus Cisco IOS: MPLS and LDP
The comments igp2bgp and Tiziano Tofoni made to my LDP-IGP Synchronization in MPLS Networks post prompted me to look deeper into basic Junos MPLS configuration and LDP behavior. As expected, there are some significant differences between Cisco’s and Juniper’s LDP implementations (and, as is usually the case, they’re both strictly conformant with RFC 5036).
LDP-IGP Synchronization in MPLS Networks
A reader of my blog planning to migrate his network from a traditional BGP-everywhere design to a BGP-over-MPLS one wondered about potential unexpected consequences. The MTU implications of introducing MPLS in a running network are usually well understood (even though you could get some very interesting behavior); if you can, increase the MTU size by at least 16 bytes (4 labels) and check whether the MTU includes the L2 header. Another, somewhat more mysterious, beast is the interaction between IGP and LDP that can cause traffic disruptions after the physical connectivity has been reestablished.
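The usual fix is to tell the IGP not to advertise the restored link with its normal metric until LDP is operational on it. Both implementations have a knob for that; rough sketches with hypothetical interface and process names:

  On Cisco IOS, under the IGP process:
  router ospf 1
   mpls ldp sync

  On Junos, per interface under the IGP:
  set protocols ospf area 0.0.0.0 interface ge-0/0/0.0 ldp-synchronization

Without it, traffic can be blackholed for as long as it takes LDP to catch up with the IGP on the restored link.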
MPLS is not tunneling
Greg (@etherealmind) Ferro started an interesting discussion on Google+, claiming MPLS is just tunneling, a duct-tape fix like NAT. I would be the first one to admit MPLS has its complexities (not many ;) and shortcomings (a few ;), but calling it a tunnel just confuses the innocents. MPLS is not tunneling; it’s a virtual-circuit-based technology, and the difference between the two is a major one.
Quotes of the week
I’ve spent the last few days with a fantastic group of highly skilled networking engineers (can’t share the details, but you know who you are) discussing the topics I like most: BGP, MPLS, MPLS Traffic Engineering, and IPv6 in Service Provider environments.
One of the problems we were trying to solve was a clean split of a POP into two sites, retaining redundancy without adding too much extra equipment. The quest for maximum redundancy nudged me to propose the unimaginable: a layer-2 interconnect between four tightly controlled routers running BGP. Even that got shot down with a memorable quote from the senior network architect: