MTU issues (and TCP MSS clamping) in residential IPv6 deployments
Numerous residential access technologies face path MTU discovery issues. PPPoE (with an MTU of 1492 bytes instead of 1500 bytes) is the best-known example, and we’ll see more of these issues as various tunneling-based IPv4-to-IPv6 transition mechanisms (6rd, DS-Lite, MAP-E) become more popular.
Obviously you could use the same old MSS clamping tricks in the brave new IPv6 world, or decide (like DS-Lite) to deal with IP fragmentation in the underlay access network ... but there’s another option in IPv6: reduce the client-side MTU with router advertisement messages.
How does it work?
IPv6 router advertisement messages (defined in RFC 4861) can include an MTU option to “... ensure all nodes on a link use the same MTU value in the cases where the link MTU is not well known.”
As is usually the case in networking, that option is commonly misused for a TCP MSS clamping kludge: if the router advertises an MTU lower than 1500 bytes on client-facing interfaces, the clients pick up the lower MTU and start advertising a lower MSS value in TCP SYN packets.
Examples: the IPv6 MTU advertised on Ethernet interfaces should be 1492 bytes in PPPoE environments and 1480 bytes in 6rd environments.
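For example, an end-host that picks up one of those MTU values advertises an MSS of MTU minus 60 bytes (40-byte IPv6 header plus 20-byte TCP header) in its IPv6 TCP SYN packets: 1432 bytes with a 1492-byte MTU (PPPoE) or 1420 bytes with a 1480-byte MTU (6rd).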
How good is this solution?
The reduced LAN MTU trick works only if:
- The end-hosts listen to the MTU option in router advertisement messages (usually they do);
- All routers attached to the LAN advertise the same value (or you have a single router, as is usually the case in residential deployments);
- You don’t mind that intra-LAN communication uses the reduced MTU value (no jumbo frames).
Not exactly the best solution there is, but it’s good enough for residential deployments ... and in many cases it’s the best you can do. There’s no ipv6 tcp adjust-mss command in Cisco IOS (and I wasn’t able to find one in Junos either).
In a recent e-mail exchange Trevor Warwick claimed the ip tcp adjust-mss command modifies the MSS option for both IPv4 and IPv6 TCP sessions since IOS release 15.2(4)M ... without considering the larger IPv6 header. An IOS developer obviously did a great job ;)
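To put that in numbers: on a PPPoE uplink you’d configure ip tcp adjust-mss 1452 (1492 bytes minus 20 bytes of IPv4 header and 20 bytes of TCP header). Applied blindly to IPv6 sessions, the same value produces 1452 + 40 + 20 = 1512-byte packets, 20 bytes more than the 1492-byte MTU; the correct IPv6 value would be 1432 bytes.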
How is it configured?
Cisco IOS always advertises the link MTU in router advertisement messages. You can change the MTU with the ipv6 mtu command and inspect the contents of router advertisement messages with the debug ipv6 nd command.
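A minimal sketch of the relevant interface configuration, matching the debug printout below (the router’s global interface address is a made-up example):

interface FastEthernet0/1
 ipv6 address 2001:DB8:0:1::1/64
 ipv6 mtu 1400

The reduced MTU then shows up in the router advertisements sent on that interface: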
ICMPv6-ND: Request to send RA for FE80::C800:8FF:FE04:6
ICMPv6-ND: Sending RA from FE80::C800:8FF:FE04:6 to FF02::1 on FastEth0/1
ICMPv6-ND: MTU = 1400
ICMPv6-ND: prefix = 2001:DB8:0:1::/64 onlink autoconfig
ICMPv6-ND: 120/60 (valid/preferred)
Some low-end CPE devices automatically reduce the LAN side MTU to reflect the WAN connectivity (PPPoE or 6rd).
More information
Building large IPv6 service provider networks webinar describes the fine points of IPv6 router advertisement mechanisms and related Cisco IOS configuration; 6rd, DS-Lite and MAP-E are described in IPv6 transition mechanisms webinar. You get access to both of them (and numerous others) with the yearly subscription.
Seems that the gap between IPv4 and IPv6 headers with ip tcp adjust-mss is solved in 15.2(4)M2.
You CANNOT do fragmentation over OTV or EoL2TP; you MUST have an MTU larger than 1500 bytes in the transit network.
Even if you'd manage to get it to work (breaking PMTUD in the process), you'd kill the CPU in the receiving router doing the reassembly.
We are using CSR1Ks to put in a temporary stretched VLAN for migration purposes over a 1500-byte MTU transport network.
The CSR1K BU has validated the design:
http://www.cisco.com/en/US/docs/ios-xml/ios/wan/command/wan-m1.html#wp1501508090
Thank you for the response. OTV fragmentation will solve our problems.
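For reference, if I recall the IOS XE syntax correctly (do verify against the command reference linked above), the knob in question is a global configuration command along these lines, with the join interface name being just a placeholder:

otv fragmentation join-interface GigabitEthernet0/0/0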
Sending RAs with reduced MTU is always an option, but every time someone suggests that, there's an ivory tower crowd jumping up and down and mourning the loss of performance.
( http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipapp/command/iap-cr-book.html )
The IPv6 experience on my 6RD residential access had deteriorated considerably while I was away for 2 months (probably some changes upstream) and the symptoms just reeked of MTU problems (endless delays, missing packets in TLS negotiations, timeouts, the lot!).
"ipv6 tcp adjust-mss 1400" on the 6RD tunnel interface cured it for good.