Recently I stumbled across a year-old post by James Ventre describing the reasons the output rate on an Ethernet-type interface (as reported by the router) never reaches the actual interface speed. One of them: the inter-frame/packet gap (IPG).
I was stunned ... I remember very well the early days of thick/thin coax Ethernet, when the IPG was needed for proper carrier sense/collision detection (the probability of a collision decreases drastically as you introduce an IPG), but on a high-speed point-to-point full-duplex link? You must be kidding.
Unfortunately, that’s not the case. The IEEE 802.3 standard is very specific. Section 4.2.3.2.1, full duplex operation:
After the last bit of a transmitted frame (that is, when transmitting changes from true to false), the MAC continues to defer for a proper interPacketGap (see 4.2.3.2.2)
And further on (4.2.3.2.2):
This is intended to provide interpacket recovery time for other CSMA/CD sublayers and for the physical medium.
What recovery time? What is there to recover if you continuously transmit on a full-duplex channel?
Next line of thought: maybe the IPG is really needed to make sure your brand-new $9.99 Fast Ethernet switch can still work with an ancient 10 Mbps NE1000 NIC. But how does enforcing the same behavior on 10GE make sense? According to Section 4.4.2 of the IEEE 802.3 standard, the IPG is 96 bits regardless of the MAC data rate. Can anyone enlighten me? Or is the IPG just another one of those you-don’t-want-to-care things?
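For the curious, the throughput impact of that fixed 96-bit gap is easy to quantify. Here's a quick back-of-the-envelope sketch (my own illustration, not from the standard's text beyond the 12-byte IPG and 8-byte preamble/SFD values) showing how much of the line rate is left for actual frame bits at various frame sizes:

```python
# Per-frame overhead on the wire that the interface counters never see:
# the 96-bit (12-byte) IPG plus the 8-byte preamble/SFD (IEEE 802.3).
IPG_BYTES = 12        # 96 bit times, regardless of MAC data rate
PREAMBLE_BYTES = 8    # 7-byte preamble + 1-byte start-of-frame delimiter

def frame_efficiency(frame_bytes: int) -> float:
    """Fraction of the line rate occupied by frame bits -- roughly
    the maximum 'output rate' a router could ever report."""
    return frame_bytes / (frame_bytes + IPG_BYTES + PREAMBLE_BYTES)

# Minimum, mid-size, and maximum untagged Ethernet frames
for size in (64, 512, 1518):
    eff = frame_efficiency(size)
    print(f"{size:5d}-byte frames: {eff:6.1%} of line rate "
          f"({eff * 1000:6.1f} Mbps on GigE)")
```

With minimum-size 64-byte frames the 20 bytes of IPG + preamble eat roughly a quarter of the wire rate; with maximum-size frames the loss shrinks to about 1.3 percent, which is why the gap between reported output rate and interface speed depends so heavily on the traffic mix.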