Greg Ferro (@etherealmind) started an interesting discussion on Google+, claiming MPLS is just tunneling, and duct tape like NAT. I would be the first to admit MPLS has its complexities (not many ;) and shortcomings (a few ;), but calling it a tunnel just confuses the innocents. MPLS is not tunneling; it’s a virtual-circuit-based technology, and the difference between the two is a major one.
You can talk about tunneling when a protocol that should be lower in the protocol stack gets encapsulated in a protocol you’d usually find above or next to it. MAC-in-IP, IPv6-in-IPv4, IP-over-GRE-over-IP, MAC-over-VPLS-over-MPLS-over-GRE-over-IPsec-over-IP ... these are tunnels. IP-over-MPLS-over-PPP/Ethernet is not tunneling, just like IP-over-LLC1-over-Token-Ring or IP-over-X.25-over-LAPD weren’t.
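The distinction above can be captured in a toy model: an encapsulation is a tunnel when the payload protocol sits at the same or a lower layer than its carrier. This is just an illustrative sketch; the layer numbers (including the customary "2.5" for MPLS) are rough OSI-style approximations, not anything from a standard.

```python
# Toy model of the tunneling definition: encapsulation is a tunnel when
# the payload's layer is at or below the carrier's layer. Layer numbers
# are informal approximations chosen for illustration only.

LAYER = {
    "MAC": 2, "Ethernet": 2, "PPP": 2, "LLC1": 2,
    "MPLS": 2.5,               # the customary "layer 2.5" shim
    "IP": 3, "IPv6": 3, "GRE": 3, "X.25": 3,
}

def is_tunnel(payload: str, carrier: str) -> bool:
    """True when carrying *payload* over *carrier* counts as tunneling."""
    return LAYER[payload] <= LAYER[carrier]

print(is_tunnel("IPv6", "IP"))   # IPv6-in-IPv4 -> True (a tunnel)
print(is_tunnel("MAC", "IP"))    # MAC-in-IP    -> True (a tunnel)
print(is_tunnel("IP", "MPLS"))   # IP-over-MPLS -> False (not a tunnel)
```

IP-over-MPLS comes out as ordinary layering, exactly like IP-over-X.25 did, while MAC-in-IP or IPv6-in-IPv4 come out as tunnels.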
It is true that MPLS uses virtual circuits, but virtual circuits are not tunnels. Just because all packets between two endpoints follow the same path and the switches in the middle don’t inspect their IP headers doesn’t mean you’re using a tunneling technology.
One-label MPLS is (almost) functionally equivalent to two well-known virtual-circuit technologies, ATM and Frame Relay (replacing them was also MPLS’s first use case). However, MPLS-based networks scale better than those using ATM or Frame Relay because of two major improvements:
Automatic setup of virtual circuits based on network topology (core IP routing information), both between the core switches and between the core (P-routers) and edge (PE-routers) devices. Unless configured otherwise, the IP routing protocol performs topology autodiscovery and LDP establishes a full mesh of virtual circuits across the core.
VC merge: Virtual circuits from multiple ingress points to the same egress point can merge within the network. VC merge significantly reduces the overall number of VCs (and the amount of state the core switches have to keep) in fully meshed networks.
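The state savings from VC merge are easy to quantify with a back-of-the-envelope model. The sketch below (all numbers and topology assumptions are mine, purely for illustration) counts label mappings on a single core switch sitting in the middle of a full mesh of edge routers:

```python
# Illustrative model: per-switch state in a full mesh of edge routers,
# all unidirectional VCs crossing one core switch. Topology and counts
# are simplifying assumptions for illustration, not from any spec.

def vc_state_without_merge(n_edges: int) -> int:
    # Without VC merge (ATM-style): one VC per ordered ingress/egress
    # pair, and the core switch keeps one mapping per VC.
    return n_edges * (n_edges - 1)

def vc_state_with_merge(n_edges: int) -> int:
    # With VC merge (MPLS): all VCs toward the same egress share one
    # outgoing label at the core switch -- one entry per egress.
    return n_edges

for n in (10, 100):
    print(n, vc_state_without_merge(n), vc_state_with_merge(n))
# 10 edge routers:  90 entries vs 10
# 100 edge routers: 9900 entries vs 100
```

The core state drops from O(N²) to O(N), which is exactly why a fully meshed MPLS core scales better than a fully meshed ATM or Frame Relay core.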
It’s interesting to note that ITU wants to cripple MPLS to the point where it’s equivalent to ATM/Frame Relay: MPLS-TP introduces an out-of-band management network and management-plane-based virtual-circuit establishment.
Does it matter?
It might seem like I’m splitting hairs just for the fun of it, but there’s a significant scalability difference between virtual circuits and tunnels: devices using tunnels appear as hosts to the underlying network and require no in-network state, while solutions using virtual circuits (including MPLS) require per-VC state entries (in MPLS: inbound-to-outbound label mappings in the LFIB) on every forwarding device in the path. Worse, end-to-end virtual circuits (like MPLS TE) require state maintenance (provided by periodic RSVP signaling in MPLS TE) involving every single switch in the VC path.
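The per-VC state mentioned above can be sketched as a trivial LFIB model: every P-router in the path holds an inbound-to-outbound label mapping for every LSP crossing it. Labels and interface names below are made-up examples, not real router output:

```python
# Minimal sketch of P-router state: the LFIB maps each incoming label
# to an (outgoing label, outgoing interface) pair. One entry per LSP
# crossing this switch -- that's the in-network state tunnels avoid.
# Label values and interface names are hypothetical examples.

LFIB = {
    100: (200, "ge-0/0/1"),  # LSP from PE1 toward PE3
    101: (201, "ge-0/0/2"),  # LSP from PE2 toward PE4
}

def forward(in_label: int) -> tuple[int, str]:
    # Swap the top label and forward; the IP header is never inspected.
    out_label, out_interface = LFIB[in_label]
    return out_label, out_interface

print(forward(100))  # -> (200, 'ge-0/0/1')
```

A tunnel endpoint, by contrast, keeps its state only at the two ends; the routers in between see plain IP packets and carry no per-tunnel entries at all.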
You can find scalability differences even within the MPLS world: MPLS/VPN-over-mGRE (tunneling) scales better than pure label-based MPLS/VPN (virtual circuits) because MPLS/VPN-over-mGRE relies on IP transport and not on end-to-end LSPs between PE-routers. You can summarize loopback addresses if you use MPLS/VPN-over-mGRE; doing the same in end-to-end-LSP-based MPLS/VPN networks breaks them. L2TPv3 scales better than AToM for the same reason.
All VC-based solutions require a signaling protocol between the end devices and the core switches (or an out-of-band layer-8+ communication and management-plane provisioning). Two common protocols used in MPLS networks are LDP (for IP routing-based MPLS) and RSVP (for traffic engineering). Secure and scalable inter-domain signaling protocols are rare; VC-based solutions are thus usually limited to a single management domain (state explosion is another problem that limits the size of a VC-based network).
The only global networks using on-demand virtual circuits were the telephone system and X.25; one of them already died because of its high per-bit costs, and the other one is surviving primarily because we’re replacing virtual circuits (TDM voice calls) with tunnels (VoIP).
Don’t be sloppy with your terminology. There’s a reason we use different terms to indicate different behavior – it helps us understand the implications (for example, scalability) of the technology. It’s important to understand why bridging differs from routing and why it’s wrong to call them both switching, and it helps if you understand that Fibre Channel actually uses routing (hidden deep inside switching terminology).
Ah, and last but definitely not least: if your service provider can’t get its act together, it’s not always the technology’s fault.