The Impact of ICMP Redirects
One of my readers sent me an interesting question after reading my ICMP Redirects blog post:
In Cisco IOS, when a packet is marked for an ICMP redirect to a better gateway, that packet is punted to the CPU, right?
It depends on the platform, but it’s going to hurt no matter what.
Regardless of how a networking device forwards packets (in software or in hardware), the ICMP redirect is usually not sent from the fast path: the ICMP code has to grab a new packet buffer, create the packet, and send it to the original sender. The "grab a new packet buffer" operation in particular is not well suited to the "let's move the packets as fast as possible" mentality… unless, of course, you've pre-allocated plenty of buffers for ICMP replies, but even then the CPU cache misses would degrade performance.
Summary: It might be possible to send ICMP replies from the fast packet switching path on software-based packet forwarding platforms. Trying to solve the same problem in packet forwarding hardware is probably overkill; I would expect those devices to punt the offending packets to the CPU.
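If you want to avoid the punting penalty altogether, the usual workaround is to stop the router from generating redirects in the first place. A minimal sketch, assuming a Cisco IOS device (the interface name is just a placeholder):

  interface GigabitEthernet0/1
   ! Don't generate ICMP redirects for packets routed back out this interface,
   ! so they stay in the fast path instead of being punted to the CPU
   no ip redirects

Keep in mind this only suppresses redirect generation; the suboptimal traffic path that would have triggered the redirect is still there.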
An ICMP redirect is just an indication of a potentially better forwarding path, so it could be sent in the background long after the original packet has left the box… at least in theory. I don't know enough about actual implementations to figure out what's really going on. Comments highly welcome!
Does that mean that when a packet needs an ICMP redirect and is punted from a linecard (or ASIC) to the CPU, its forwarding becomes totally suboptimal? Or will the punted packet still use CEF forwarding on the CPU?
No idea. Feedback would be highly appreciated!
I have to admit that not only the platform generating the ICMP redirect had issues, but also the upstream router receiving it... all the subsequent packets went down the slowest possible path!
The downstream router sending the redirects ran a bit hotter on the CPU, but the upstream router receiving them either punted all the ICMP packets to the CPU (no big deal unless the box has a $0.0001 CPU in it, which sometimes happens), or, if the redirect was understood and acted upon, all the flows toward that destination ended up being punted to the CPU.
I observed it on Junos EX series (upstream, 100% CPU with 2-3 Mbit/s), on Brocade MLX (downstream, a couple of Mbit/s; upstream, tens of Mbit/s), and on Fortigate (upstream, on the order of less than 1 Mbit/s).
The least bad implementation I saw was Cisco (classic old IOS platforms like the 6500/3750/2600/2800 etc., and also some Nexus 7000s).
Basically, all the low-cost implementations with severely underpowered control-plane CPUs were really shitty.
If you do a netdr capture (CPU capture) or an ELAM capture (packet capture in the hardware path, i.e. the data plane) of the packets, you will see that the packet uses a special destination index, 0x7f07, which means "Punt to CPU for ICMP Redirect". I cannot share any internal documentation, but the closest public reference I could find is this:
http://certification.codergenie.com/certification/post/2013/12/15/Troubleshooting-Routing-Loops-On-IOS-And-IOS-XR.aspx
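For what it's worth, here is a rough sketch of how one might look at those punted packets on a Catalyst 6500; the exact syntax varies by supervisor and IOS release, so treat the commands below as an approximation rather than a recipe:

  ! Capture packets hitting the switch CPU (the "netdr" CPU capture)
  debug netdr capture rx
  ! Display what was captured, including the destination index
  show netdr captured-packets

ELAM captures are even more platform-specific (they poke directly at the forwarding ASICs), so they're best done with TAC guidance.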
The Catalyst 4500 guide confirms this too:
"In this case, a packet is routed through the same interface, which leads to the issue of an ICMP redirect for each packet. This root cause is one of the common reasons for high CPU utilization on the Catalyst 4500."
http://www.cisco.com/c/en/us/support/docs/switches/catalyst-4000-series-switches/65591-cat4500-high-cpu.html
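On the Catalyst 4500, checking the CPU punt path is the usual way to confirm this; a hedged sketch of the kind of commands involved (exact syntax and output differ between releases):

  ! See which process is consuming the CPU
  show processes cpu sorted
  ! Check the per-queue statistics for packets punted to the CPU
  show platform cpu packet statistics

A steadily incrementing punt queue combined with a busy packet-handling process is the typical signature of traffic (such as redirect candidates) being forwarded in software.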
If, for example, there is a trunk with two VLANs and traffic is routed between the two SVIs, Brocade will send an ICMP redirect. I'm not sure what the logic behind it is, but it looks like the fact that the packet enters and leaves via the same physical(!) interface triggers this behavior.
In some cases, the box can become jammed completely with just a few Mbit/s of traffic.
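To make the scenario concrete, here is an approximate sketch of the setup (written in Cisco-style syntax purely for illustration; the actual box was a Brocade, whose configuration syntax differs): two SVIs routed over a single trunk, so every inter-VLAN packet enters and leaves through the same physical port:

  ! One trunk carries both VLANs, so routed traffic between the SVIs
  ! arrives and departs on the same physical interface
  interface GigabitEthernet0/1
   switchport mode trunk
   switchport trunk allowed vlan 10,20
  !
  interface Vlan10
   ip address 192.0.2.1 255.255.255.0
  !
  interface Vlan20
   ip address 198.51.100.1 255.255.255.0

On most platforms this is a perfectly normal router-on-a-stick design and does not trigger redirects, which is what makes the observed behavior surprising.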
Richard Steenbergen summarized it perfectly some time ago (apparently, things haven't changed since 2006) :)
https://puck.nether.net/pipermail/foundry-nsp/2006-December/005390.html
-Igor