Numerous residential access technologies face path MTU discovery issues. PPPoE connections (with MTU = 1492 bytes instead of 1500 bytes) are the best-known example, and we’ll see more of these issues as various tunneling-based IPv4-to-IPv6 transition mechanisms (6rd, DS-Lite, MAP-E) become more popular.
Obviously you could use the same old MSS clamping tricks in the brave new IPv6 world or decide (like DS-Lite) to deal with IP fragmentation in underlay access networks ... but there’s another option in the IPv6 world: reduce client-side MTU with router advertisement messages.
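As a rough illustration of the RA-based approach on Cisco IOS (interface names and the 6rd-style MTU value are assumptions, and command availability varies by release): the IPv6 MTU configured on a LAN interface is advertised in the RA MTU option, and you could still clamp the TCP MSS on the tunnel uplink as a belt-and-suspenders measure.

```
! LAN interface facing the clients: the configured IPv6 MTU
! is advertised in the router advertisement MTU option
interface GigabitEthernet0/1
 ipv6 mtu 1480
!
! Optional MSS clamping on the 6rd tunnel uplink
! (check whether your release supports the IPv6 variant)
interface Tunnel0
 ipv6 tcp adjust-mss 1420
```

The 1480-byte value assumes a 6rd tunnel (1500 minus a 20-byte IPv4 header); other transition mechanisms need different values.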
The murky details of IPv6 implementations never crop up till you start deploying it (or, as Randy Bush recently wrote: “it is cheering to see that the ipv6 ivory tower still stands despite years of attack by reality”).
Here’s another one: in theory the prefixes delegated through DHCPv6 should be static and permanently assigned to the customers.
Somehow I got involved in an IPv6 RADIUS accounting discussion. This is what I found to work in Cisco IOS release 15.2(4)S:
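As a generic, hedged sketch of the moving parts (server address, key and timers are made up; the exact combination that works depends on your release and broadband configuration):

```
aaa new-model
!
radius server HOMELAB
 address ipv4 192.0.2.10 auth-port 1812 acct-port 1813
 key ThisIsNotASecret
!
! Send start/stop accounting records for network (PPP) sessions
aaa accounting network default start-stop group radius
!
! Delay the accounting start record until addresses/prefixes are
! assigned, so they appear in the Accounting-Request packets
aaa accounting delay-start
```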
A while ago John McManus wrote a great DSCP QoS Over MPLS Thoughts article on the Etherealmind blog explaining how the 6-bit IP DSCP value gets mapped into the 3-bit MPLS EXP field (since renamed to Traffic Class). The most important lesson from his post should be “there is no direct DSCP-to-EXP mapping and you have to coordinate your ideas with the SP”. Let’s dig deeper into the SP architecture to truly understand the complexities of this topic.
We’ll start with a reference diagram: user traffic is flowing from Site-A to Site-B and the Service Provider is offering MPLS/VPN service between PE-A and PE-B. Traffic from multiple customer sites (including Site-A) is concentrated at SW-A and passed in individual VLANs to PE-A.
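To make the coordination concrete, here’s a hedged sketch of how PE-A could map customer DSCP values into MPLS EXP at label imposition (class names, VLAN number and the DSCP/EXP pairings are assumptions; the actual mapping is whatever you negotiate with the SP):

```
class-map match-any VOICE
 match dscp ef
class-map match-any BUSINESS
 match dscp af31 af32
!
policy-map CUSTOMER-IN
 class VOICE
  set mpls experimental imposition 5
 class BUSINESS
  set mpls experimental imposition 3
 class class-default
  set mpls experimental imposition 0
!
! Customer-facing VLAN subinterface on PE-A
interface GigabitEthernet0/0.101
 description Site-A traffic from SW-A
 service-policy input CUSTOMER-IN
```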
Following the ADSL QoS discussions, I decided to test “a few” things in the lab. I didn’t want to build a huge lab with DSL modems and DSLAM and decided to emulate an end-to-end DSL network with routers.
Step 1: Create a PPPoE session between a client (SOHO router) and a server (NAS). I’d never configured it before, so I visited uncle Google first. It gave me tons of useless hits (no wonder) and a few somewhat useful ones. All of them used the old VPDN-centric PPPoE syntax. It seems I’d stumbled across another “niche spot”: a PPPoE testbed implemented with the recent IOS configuration syntax (bba-group).
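For reference, a minimal bba-group-based testbed could look like this (interface names, pool range and CHAP credentials are assumptions):

```
! --- NAS (PPPoE server) ---
bba-group pppoe LABGROUP
 virtual-template 1
!
interface Virtual-Template1
 ip unnumbered Loopback0
 peer default ip address pool PPPOE-POOL
 ppp authentication chap
!
ip local pool PPPOE-POOL 10.0.0.2 10.0.0.254
!
interface FastEthernet0/0
 no ip address
 pppoe enable group LABGROUP
!
! --- SOHO client ---
interface FastEthernet0/0
 no ip address
 pppoe-client dial-pool-number 1
!
interface Dialer1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp chap hostname client
 ppp chap password 0 secret
```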
Two days ago I managed to write a GenuineStupidity™ (OK, maybe I cannot get a trademark on this concept): the MQC shaping actions cannot be attached to a Dialer interface; they have to be specified on the underlying physical interface (in the case of a PPPoE link, the outside Ethernet interface).
The reason for my stupidity (apart from the obvious one: writing without testing) is the difference between true logical interfaces and dialer templates. A tunnel interface or a VLAN interface is a true logical interface; it behaves like any other interface (with a few exceptions; for example, tunnel interface does not have an output queue) and you can use most QoS actions (including shaping) on it. A dialer interface is even more “conceptual”. It can never be operational on its own – as soon as the link is established, it’s bound to a physical (for example, BRI0:1) or virtual access interface (which is yet again bound to a physical interface) and the shaping is performed on the final physical interface.
This behavior (on top of being unexpectedly inconsistent) results in interesting quirks. For example, you have to shape PPPoE packets (based on their IP characteristics) on the physical Ethernet interface which usually doesn’t even have an IP address.
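A hedged sketch of where the shaper actually has to go (rate and interface names are assumptions):

```
policy-map SHAPE-UPSTREAM
 class class-default
  shape average 512000
!
! Physical Ethernet interface carrying the PPPoE session;
! note it has no IP address of its own
interface FastEthernet0/0
 no ip address
 pppoe-client dial-pool-number 1
 service-policy output SHAPE-UPSTREAM
```

Attaching the same service policy to Dialer1 is what doesn’t work.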
… and let’s hope that the late hour hasn’t resulted in another blunder.
Based on the ADSL reference model we’ve discussed last week, let’s try to figure out how you can influence the quality of service over your ADSL link (for example, you’d like to prioritize VoIP packets over web download). To understand the QoS issues, we need to analyze the congestion points; these are the points where a queue might form when the network is overloaded and where you can reorder the packets to give some applications a preferential treatment.
Remember: QoS is always a zero-sum game. If you prioritize some applications, you’re automatically penalizing all others.
The primary congestion point in the downstream path is the PPPoE virtual interface on the NAS router (marked with a red arrow in the diagram below), where the Service Provider usually performs traffic policing. It’s better from the SP perspective to police the traffic @ NAS than to send all the traffic to DSLAM where it would be dropped in the ATM hardware. Secondary congestion points might arise in the backhaul network (if the network is heavily oversubscribed) and in DSLAM (if the NAS policing does not match the QoS parameters of the ATM virtual circuit).
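On the upstream side, the only congestion point you control is the CPE router itself. A hedged example (rates, class names and the DSCP match are assumptions) of moving the queue onto the CPE by shaping slightly below the ADSL upstream rate and giving VoIP strict priority within the shaper:

```
class-map match-any VOIP
 match dscp ef
!
policy-map PRIORITIZE
 class VOIP
  priority 128
 class class-default
  fair-queue
!
policy-map SHAPE-TO-ADSL
 class class-default
  shape average 600000
  service-policy PRIORITIZE
!
! Interface toward the DSL modem
interface Ethernet0
 service-policy output SHAPE-TO-ADSL
```

Shaping below the physical upstream rate (600 kbps on a nominal 768 kbps link in this sketch) keeps the queue on the router, where the priority queuing can act on it, instead of in the modem.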
I’m getting lots of ADSL QoS questions lately, so it’s obviously time to cover this topic. Before going into the QoS details, I want to make sure my understanding of the implications of the baroque ADSL protocol stack is correct.
In the most complex case, a DSL service could have up to eight separate components (including the end-user’s workstation):
- End-user workstation sends IP datagrams to the local (CPE) router.
- CPE router runs a PPPoE session with the NAS (Network Access Server) and sends Ethernet frames to the DSL modem.
- DSL modem encapsulates Ethernet frames in RFC 1483 framing, slices them in ATM cells and sends them over the physical DSL link to DSLAM.
- DSLAM performs physical level concentration and sends the ATM cells (one VC per subscriber) into the network.
- The backhaul network (DSLAM to NAS) could be partly ATM based. The ATM cells could thus pass through several ATM switches.
- Eventually the ATM cells have to be reassembled into PPPoE frames. In a worst-case scenario, an ATM-to-Ethernet switch would perform that function.
- The backhaul network could be extended with Ethernet switches.
- Finally, the bridged PPPoE frames arrive @ NAS which terminates the PPPoE session and emits the IP datagrams into the IP core network.
Yesterday I described the difference between line rate and bit rate (actually physical layer gross bit rate and physical layer net bit rate). Going to the other extreme, we can measure goodput (application-level throughput), which obviously depends on multiple factors, including the TCP window sizes and end-to-end delays. There are numerous tools to test the goodput from/to various locations throughout the world (speedtest.net worked quite nicely for me) and you’ll soon discover that the goodput on your DSL line differs significantly from what the ISP is advertising.
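Part of the difference is the ATM “cell tax”. A worked example for a full-size packet, assuming PPPoE with RFC 1483 LLC/SNAP bridged encapsulation (exact header sizes vary with the encapsulation your ISP uses):

```
IP packet (PPPoE MTU)              1492 bytes
+ PPP + PPPoE headers                 8 bytes
+ Ethernet header                    14 bytes
+ RFC 1483 LLC/SNAP bridged hdr     ~10 bytes
+ AAL5 trailer                        8 bytes
                                 = ~1532 bytes
→ 32 ATM cells (32 × 48 = 1536 payload bytes)
→ 32 × 53 = 1696 bytes on the wire

Efficiency ≈ 1492 / 1696 ≈ 88%
```

So even before TCP/IP header overhead and window-size effects, roughly an eighth of the advertised DSL bandwidth is consumed by the encapsulation stack, and small packets fare considerably worse.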
Recently I had to implement Internet access using ADSL as the primary link and ISDN as the backup link. Obviously the most versatile solution would use the techniques described in my IP Corner article Small Site Multi-homing, but the peculiarities of Cisco IOS implementation of the ADSL technology resulted in a much simpler solution.
In my home office, I'm using DSL access to the Internet with ISDN backup to another ISP, as shown on the next figure:
Obviously, I would like the ISDN backup to kick in whenever the primary connection goes down; two static default routes and reliable static routing on the primary default seem like a perfect solution.
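A hedged sketch of the reliable static routing part (the probe target, interface names and administrative distance are assumptions): an IP SLA probe tracks reachability through the DSL link, the primary default route is tied to the track object, and a floating static route takes over when the track goes down.

```
! Probe a well-known destination through the DSL link
ip sla 1
 icmp-echo 192.0.2.1 source-interface Dialer1
 frequency 10
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
!
! Primary default route via DSL, installed only while the probe succeeds
ip route 0.0.0.0 0.0.0.0 Dialer1 track 1
! Floating static default route via the ISDN backup
ip route 0.0.0.0 0.0.0.0 Dialer2 250
```

The catch (as the rest of the story shows) is that the peculiarities of the IOS ADSL implementation allow an even simpler setup.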