Multilink bundles have varying bandwidth

I have always intuitively assumed that the interface bandwidth of an MLPPP bundle is the sum of the bandwidths of the individual member interfaces. I recently tested this assumption, and the bundle behaves exactly as expected.

For example, with the following interface setup …

interface Multilink1
 ip unnumbered Loopback0
 ppp multilink
 ppp multilink group 1
!
interface Serial1/4
 bandwidth 2000
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial1/5
 bandwidth 4000
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
 serial restart-delay 0

… the bandwidth of the Multilink1 interface is 6000 kbps if both serial lines are up …

Rtr#show interface Multilink 1 | inc protocol|BW
Multilink1 is up, line protocol is up
  MTU 1500 bytes, BW 6000 Kbit, DLY 20000 usec,

… but drops to 4000 kbps when Serial1/4 is disconnected:

%LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1/4, changed state to down

Rtr#show interface Multilink 1 | inc protocol|BW
Multilink1 is up, line protocol is up
  MTU 1500 bytes, BW 4000 Kbit, DLY 20000 usec,
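To double-check the dynamic behavior, here is a minimal sketch (assuming the same lab setup as above): once the member link rejoins the bundle, the bundle bandwidth should be recomputed as the sum of the member bandwidths.

```
Rtr#configure terminal
Rtr(config)#interface Serial1/4
Rtr(config-if)#no shutdown
Rtr(config-if)#end
! Once Serial1/4 rejoins the bundle, the bundle bandwidth
! should be recomputed as 2000 + 4000 = 6000 kbps
Rtr#show interface Multilink 1 | inc protocol|BW
```

Keep in mind that anything derived from the interface bandwidth (routing protocol metrics, percentage-based QoS bandwidth statements) changes along with it; if that's undesirable, you could pin a static value with the bandwidth command on the Multilink interface.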

6 comments:

  1. You are very much right. I want to add one more thing: 1200 bytes per member interface are used for buffers. Another problem I have seen in our environment concerns multilink bundles with nine E1s: with 9 E1s in the bundle the router hangs, but if I remove one E1 and go back to 8 or fewer, it works fine.
    Any experience with this?

    ReplyDelete
  2. I think it depends heavily on the particular hardware. I've had a positive experience with a 12xE1 bundle on a 7206VXR/NPE-G1 (12.3 mainline).

    Note: a large number of physical links may cause extensive fragment reordering, which means lower efficiency and higher CPU usage.

    ReplyDelete
  3. I am using a 7206VXR/NPE-400 with 12.4, but the problem still persists. Not able to understand what the reason is.

    ReplyDelete
  4. As Uri told you, it's probably hardware dependent. You should open a case with Cisco TAC.

    ReplyDelete
  5. Did you even notice that 1/4 was configured with bandwidth 2000 and 1/5 with bandwidth 4000? Hence the 4000 kbps bandwidth with 1/4 shut down makes perfect sense.

    ReplyDelete
  6. Of course I noticed :) The point of the post was that the interface bandwidth changes dynamically based on what's in the bundle.

    ReplyDelete


Ivan Pepelnjak, CCIE#1354, is the chief technology advisor for NIL Data Communications. He has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced technologies since 1990. See his full profile, contact him or follow @ioshints on Twitter.