
Response: Hardware Differences between Routers and Switches

Dmytro Shypovalov sent me his views on the hardware differences between routers and switches. Enjoy!

So, a long time ago routers were L3 devices with CPU forwarding and switches were L2 devices with ASICs. Then TCAM and L3 switches were invented, and since then ASICs have evolved to support more features (QoS, encapsulations, etc.) and store more routes, while CPU-based architectures have evolved toward specialised NPUs and parallel processing (e.g. Cisco QFX) to handle more traffic while supporting all the features of CPU forwarding.

At some point the two approaches intersected, which allowed vendors to market a pipeline-based device (with ASIC and TCAM) as a router. A classic example is Cisco rebranding the 6500 switch as the 7600 router. The 6500 with the proper supervisor (3BXL if memory serves me) could already handle a full BGP view at the time, and it supported all MPLS features except VPLS. It could even fit some WAN linecards with non-Ethernet ports. But it was still marketed as a switch! In the 7600 they added more WAN linecards with metro Ethernet, advanced QoS almost on par with software routers (shaping, hierarchical policies, etc.), and VPLS. They did that by offloading the advanced logic to new linecards, so whether your features would work depended on the ingress and egress linecards for a given traffic flow. I supported the 7600 when I worked in Cisco TAC, and half the cases started with figuring out which linecards the source and destination were connected to, and then looking at a big spreadsheet that told you whether that setup was supposed to work.

Another problem with pipeline architectures is that packets get punted to the CPU whenever a feature can't be handled by the ASIC. Apart from potentially causing a huge performance problem, punting can also break features (QoS, ACLs, etc.) because the packet is taken out of the pipeline for CPU handling. This is an endless rabbit hole. Suppose we're using anycast and want all packets of a flow sent out of the same interface: how about sending the first TCP SYN to the CPU for MSS clamping (and letting the CPU do the ECMP hashing), while the ASIC does the ECMP hashing for all other packets of the same flow? This doesn't occur (or occurs much less) in "true routers" with CPU-based architectures. To make it worse, it is often far from obvious whether a packet will be handled by the CPU or the ASIC. You can configure exactly the same thing in slightly different ways and get different results (e.g. on our beloved 6500/7600, PBR with set interface vs set ip next-hop, or an empty-sequence PBR – I crashed a border router once this way!). Or many GRE tunnels terminated on the same IP; the list goes on and on. And of course recirculation, which not only halves performance but can also break some features.
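The anycast/ECMP failure mode above boils down to two forwarding paths computing different hashes over the same flow. Here's a minimal Python sketch of the idea; the hash functions, interface names, and flow tuple are all made up for illustration – real ASICs and network stacks use their own (typically undocumented) hash algorithms:

```python
import hashlib
import zlib

LINKS = ["eth1", "eth2", "eth3", "eth4"]  # four equal-cost next hops

def asic_hash(flow):
    # Hypothetical ASIC load-balancing hash: CRC32 over the 5-tuple
    return LINKS[zlib.crc32(repr(flow).encode()) % len(LINKS)]

def cpu_hash(flow):
    # Hypothetical CPU load-balancing hash: MD5 over the same 5-tuple
    return LINKS[hashlib.md5(repr(flow).encode()).digest()[0] % len(LINKS)]

# src IP, dst IP, protocol, source port, destination port
flow = ("10.0.0.1", "192.0.2.1", 6, 40000, 80)

# The first SYN is punted to the CPU (e.g. for MSS clamping) while the
# rest of the flow stays in the ASIC pipeline; each path runs its own
# hash, so packets of the same flow may leave on different interfaces.
print("SYN (CPU path):   ", cpu_hash(flow))
print("rest (ASIC path): ", asic_hash(flow))
```

Each function is deterministic on its own, so neither path sees anything wrong – the inconsistency only shows up when both are applied to the same flow, which is exactly what makes these problems so hard to troubleshoot.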

When Cisco announced the EoL of the 6500 and 7600, they advised customers to replace the 6500 with the Nexus 7000 (a switch with a switch) and the 7600 with the ASR9000 (a router with a router). N7K and ASR9K are fundamentally different platforms with different architectures and different operating systems, and the latter is much more expensive. So lots of people thought they could use the N7K where they had used the 6500 or 7600 – for example, to run MPLS services or RSVP-TE. This was a very bad idea. The only time in my life I saw a routing loop in a single-area link-state IGP (which in theory should not be possible) was on a Cisco Nexus. And the MPLS implementation there was even worse. Anyway, this is a rant about software quality, not platform architecture per se.

As Mark Twain famously said, "History doesn't repeat itself, but it often rhymes." So 20 years after the 6500/7600 we got the Arista 7280/7500R series – initially marketed as switches, but with the development of an advanced routing suite in EOS, the later R2 and R3 models are marketed as routers. Indeed, they can handle a full BGP view and support most MPLS and other advanced routing features, but they don't support all of those simultaneously. So if you want to run MPLS-EVPN, a full BGP view, advanced QoS, BGP FlowSpec, and VXLAN routing on the same box – good luck with that. And then there are countless small annoying bugs and hardware limitations related to encapsulations – GRE (MTU enforcement, TTL handling), MPLS (maximum labels to push, explicit-null handling, ECMP hashing), UCMP, and the list goes on and on. All of those things sort of work until they don't, and then the customer is not happy to hear that the box they bought as a router is, in fact, a fancy switch.
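To make the GRE MTU issue mentioned above concrete: the tunnel headers eat into the link MTU, and a box that doesn't enforce the reduced tunnel MTU ends up fragmenting or silently dropping packets. The arithmetic below uses the standard IPv4 and basic GRE header sizes; the helper function itself is just an illustration, not any vendor's implementation:

```python
ETH_MTU = 1500   # typical Ethernet payload MTU
IP_HDR = 20      # outer IPv4 header, no options
GRE_HDR = 4      # basic GRE header, no key/sequence fields

def gre_payload_mtu(link_mtu=ETH_MTU):
    # Largest inner packet that fits without fragmenting the outer packet
    return link_mtu - IP_HDR - GRE_HDR

print(gre_payload_mtu())  # 1476
```

That 1476-byte figure is why GRE tunnel interfaces commonly default to an MTU of 1476 on a 1500-byte link; add GRE key or sequence-number fields and the overhead grows further.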

To conclude, what is the difference between routers and switches in my opinion? I have absolutely no idea. All vendors use merchant silicon now, so they will run into similar limitations, but might try to work around them in different ways. I think the discussion about platform architecture (hardware pipeline vs software forwarding) can be more productive.


  1. I think with "Cisco QFX" he meant Cisco QuantumFlow Processor (QFP). "QFX" is from another vendor.

  2. that happens when marketing wins over engineering, and both vendors you mentioned are champions at this. It's time for someone (like Ivan) to explain that you can't do things on a shoestring.

  3. For some additional historical context, similar to this L2/L3 demarcation there was the convergence of "Service Delivery Platforms" and "Edge Routers" around 2002:

  4. Re router vs switch, I think we indeed need to place things in their historical context to get a clear distinction. People who argue that MPLS is tunneling for example, miss the point because they argue out of context. Tunneling is a blanket term, so it can be attributed to anything, but the history of MPLS shows clearly it was an evolution of ATM, and so, it should be associated with virtual circuit, not tunnel.

    Going by that, a router was originally defined as a device that moves packets based on L3 information, and a switch, on L2 information. Whether the function is performed in hardware or software is beside the point. So anything that makes use of L3 or higher-layer headers is a router. A Layer 3 or Layer 7 switch, therefore, is just marketing BS.

    As for the Cat6500, our core and Internet border routers are both Cat6807-XL, a beefed-up form of the Cat6500, and they're rock solid because we don't turn on a lot of the nightmarish featurism Dmytro described. The Cat6500 architecture is close to an engineering masterpiece, with the crossbar having a 3x speedup (for Supervisor 720), so its work-conserving capacity is high. Even now we don't have routers of similar size with better architecture, and some are worse – in this regard, the promise of whitebox switching enabling innovation in hardware seems to have fallen flat. And another thing should serve as a proxy for SDN's expected performance: a Cat6500 (with Supervisor Engine 720) without DFCs has a switching capacity of 30 Mpps, while one with DFCs reaches 400 Mpps.
