MPLS is not tunneling
Greg Ferro (@etherealmind) started an interesting discussion on Google+, claiming MPLS is just tunneling and a piece of duct tape like NAT. I would be the first to admit MPLS has its complexities (not many ;) and shortcomings (a few ;), but calling it a tunnel just confuses the innocents. MPLS is not tunneling, it’s a virtual-circuits-based technology, and the difference between the two is a major one.
You can talk about tunneling when a protocol that should be lower in the protocol stack gets encapsulated in a protocol that you’d usually find above or next to it. MAC-in-IP, IPv6-in-IPv4, IP-over-GRE-over-IP, MAC-over-VPLS-over-MPLS-over-GRE-over-IPsec-over-IP ... these are tunnels. IP-over-MPLS-over-PPP/Ethernet is not tunneling, just like IP-over-LLC1-over-TokenRing or IP-over-X.25-over-LAPD wasn’t.
It is true that MPLS uses virtual circuits, but virtual circuits are not tunnels. Just because all packets between two endpoints follow the same path, and the switches in the middle don’t inspect their IP headers, doesn’t mean you’re using a tunneling technology.
One-label MPLS is (almost) functionally equivalent to two well-known virtual-circuit technologies: ATM and Frame Relay (replacing them was also its first use case). However, MPLS-based networks scale better than those using ATM or Frame Relay because of two major improvements:
Automatic setup of virtual circuits based on network topology (core IP routing information), both between the core switches and between the core (P-routers) and edge (PE-routers) devices. Unless configured otherwise, the IP routing protocol performs topology autodiscovery and LDP establishes a full mesh of virtual circuits across the core.
VC merge: Virtual circuits from multiple ingress points to the same egress point can merge within the network. VC merge significantly reduces the overall number of VCs (and the amount of state the core switches have to keep) in fully meshed networks.
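The label-swapping and VC-merge behavior described above can be sketched in a few lines of Python. Everything here (label values, interface names, the dictionary-based LFIB) is made up for illustration, not any vendor’s actual data structure:

```python
# Minimal sketch of a core switch's LFIB (label forwarding information base).
# Labels, interfaces, and topology are hypothetical.

# Each entry maps an inbound label to (outbound label, outbound interface).
lfib = {
    100: (200, "ge-0/0/1"),   # VC from ingress PE1 toward egress PE3
    101: (200, "ge-0/0/1"),   # VC from ingress PE2 toward the SAME egress:
                              # with VC merge both map to one outbound label,
                              # so downstream switches keep a single entry
}

def forward(inbound_label: int, payload: bytes):
    """Swap the label and hand the packet to the outbound interface."""
    out_label, out_if = lfib[inbound_label]
    return out_label, out_if, payload

print(forward(100, b"ip-packet"))  # (200, 'ge-0/0/1', b'ip-packet')
print(forward(101, b"ip-packet"))  # same outbound label: circuits merged
```

The key point is visible in the two LFIB entries: after the merge point, the two ingress VCs share one outbound label, so every switch further downstream stores one entry instead of two.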
It’s interesting to note that ITU wants to cripple MPLS to the point where it’s equivalent to ATM/Frame Relay: MPLS-TP introduces an out-of-band management network and management-plane-based virtual circuit establishment.
Does it matter?
It might seem like I’m splitting hairs just for the fun of it, but there’s a significant scalability difference between virtual circuits and tunnels: devices using tunnels appear as hosts to the underlying network and require no in-network state, while solutions using virtual circuits (including MPLS) require per-VC state entries (in MPLS: inbound-to-outbound label mappings in the LFIB) on every forwarding device in the path. Worse yet, end-to-end virtual circuits (like MPLS TE) require state maintenance (provided by periodic RSVP signaling in MPLS TE) involving every single switch in the VC path.
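The state difference is easy to quantify with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (100 edge devices, 4 core hops per path), not measurements from any real network:

```python
# Rough comparison of core forwarding state: virtual circuits vs. tunnels.
# N edge devices, full mesh of edge-to-edge paths, each path crossing
# an average of HOPS core devices. All numbers are hypothetical.

N, HOPS = 100, 4

# Virtual circuits (e.g. end-to-end LSPs without merge): one LFIB entry
# per VC on every core device the VC traverses.
vcs = N * (N - 1)              # full mesh of unidirectional VCs
vc_state = vcs * HOPS          # total entries spread across the core

# Tunnels (e.g. MPLS/VPN-over-mGRE): core devices only need IP routes
# toward the N tunnel endpoints, and those routes can be summarized
# into far fewer prefixes.
tunnel_state = N               # at worst one route per endpoint

print(vc_state, tunnel_state)  # 39600 vs 100
```

Even at a modest 100 edge devices, the VC-based design needs hundreds of times more in-network state, and (unlike the tunnel endpoints’ IP routes) that state cannot be summarized.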
You can find scalability differences even within the MPLS world: MPLS/VPN-over-mGRE (tunneling) scales better than pure label-based MPLS/VPN (virtual circuits) because MPLS/VPN-over-mGRE relies on IP transport and not on end-to-end LSPs between PE-routers. You can summarize loopback addresses if you use MPLS/VPN-over-mGRE; doing the same in end-to-end-LSP-based MPLS/VPN networks breaks them. L2TPv3 scales better than AToM for the same reason.
All VC-based solutions require a signaling protocol between the end devices and the core switches (or an out-of-band layer-8+ communication and management-plane provisioning). Two common protocols used in MPLS networks are LDP (for IP routing-based MPLS) and RSVP (for traffic engineering). Secure and scalable inter-domain signaling protocols are rare; VC-based solutions are thus usually limited to a single management domain (state explosion is another problem that limits the size of a VC-based network).
The only global networks using on-demand virtual circuits were the telephone system and X.25; one of them already died because of its high per-bit costs, and the other one is surviving primarily because we’re replacing virtual circuits (TDM voice calls) with tunnels (VoIP).
Don’t be sloppy with your terminology. There’s a reason we use different terms to indicate different behavior – it helps us understand the implications (e.g., scalability) of the technology. For example, it’s important to understand why bridging differs from routing and why it’s wrong to call them both switching, and it helps if you understand that Fibre Channel actually uses routing (hidden deep inside switching terminology).
Ah, and last but definitely not least: if your service provider can’t get his act together, it’s not always the technology’s fault.
IP tunneling (which could also be used to build virtual circuits, though nobody has implemented that) "doesn't require per-VC state tables".
Scalability on a per-device basis: good, since the forwarding state can be heavily summarized (for comparison, "good" means "potentially optimal", though this is badly broken in the current internet due to the lack of hierarchical addressing)
Scalability for the whole network: bad, as the bandwidth between any two points is strictly limited by the bandwidth of the shortest path
MPLS tunneling (or ATM): requires per-device state tables, but allows per-circuit path-based routing
Scalability on a per-device basis: bad, as those tables have to get there and have to be maintained (for comparison, "bad" still means "heaps better than the current internet, but not optimal")
Scalability for the whole network: VERY good, as it allows TE
A concrete example: the network is A-[10G]->B-[10G]->C-[10G]->A, and you transmit data between A and B
IP tunnels give you max 10G (and this potentially has to be shared with other flows)
MPLS-TE gives you 20G
This may seem trivial, and for small networks you can simply make sure every core link's bandwidth exceeds the maximum total bandwidth (e.g. 20G port channels in the core, 100 Mbit uplinks to fewer than 200 devices), but this is not possible for any reasonably sized network.
If you have a network that does multiple terabits per second traffic, IP tunnels are right out.
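The triangle arithmetic above can be written out explicitly. This is just capacity bookkeeping under the commenter's assumptions (three 10G links, two disjoint A-to-B paths), not a TE implementation:

```python
# The triangle example: links A-B, B-C, C-A at 10 Gbit/s each,
# traffic flowing from A to B. Illustrative arithmetic only.

link = 10  # Gbit/s per link

# Plain IP forwarding: all A->B traffic follows the single shortest
# path (the direct A-B link), which caps the throughput.
ip_capacity = link

# MPLS-TE: pin one LSP on the direct path A->B and a second LSP on
# the detour A->C->B, using both disjoint paths simultaneously.
te_capacity = link + link

print(ip_capacity, te_capacity)  # 10 20
```

The doubling comes purely from being able to steer part of the traffic off the shortest path, which destination-based IP forwarding (without per-circuit state) cannot do.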
Interestingly enough, it is possible to implement similarly granular traffic engineering with packet switching only, and get the same blowup in shared network state. This could be left as an exercise for the readers :) Packet switching does keep network state in the core, just less granular than virtual circuits (and there are serious problems in scale-free topologies). Now, coming back to circuit switching, where the amount of network state is fixed in hardware: how come we are stuck with expensive, poorly scalable packet switches in network cores? ;)
Are VLAN "tags" tunneling?
MAC-in-MAC etc.?
Is a DLCI mapping a tunnel?
That little L2.5 wedge does behave like a virtual "SVC", since the underlying FEC can be changed, and the dynamic interplay between the L3 routing protocol and LDP also plays a role in determining the path state based on outside conditions. I would believe the term tunnel is used loosely here.
The classical de facto meaning of tunnel in the IP world was GRE-type or IP-in-IP (encrypted or not): the inner packet had no clue about, and was never exposed to, the outside, and the decapsulated packets at the endpoints had no idea how many "hops" had been traversed. So to an endpoint the traceroute shows 2 hops, yet the packet really traversed more. But in that little L2.5 wedge world the IP packet is fully exposed, and even though the top label is processed there is still a chance for a "punt", and the packet is still exposed. Yet we are using labels as "virtual tunnels" etc. The traceroute is still 2 hops to the endpoints through the "cloud" due to MP-BGP/VRF/VPN mechanics. Just semantics. We can split the nomenclature hairs on protocol function however we want these days, given the types of "tunneling" or "encapsulation" features we have available (as Ivan mentioned). Great post by Ivan, always a great read.
So instead of splitting hairs why not split tunneling(oh that was bad) ;)
Yes. LFIB is built based on the routing table.
How about MPLS L3VPN? There is still only a single L3 and a single L2 header; however, it uses an MPLS label stack: MPLS over MPLS, as it were.
If an MPLS label stack is a virtual circuit then L2VPN would be tunneling but L3VPN would be a VC.
I'm currently trying to come up with a definition for VPN, hence the interest in the subject area. So far I've managed to deduce that almost everything is VPN!
The original post by Greg on Google + is dead. Is there an archive somewhere?
@Ronald: Nope. Google's service-killing machinery is way more thorough than the tools they give us to control the data they're collecting...