[c-nsp] Multilink PPP over LNS and links that have different bandwidth

Alberto Cruz alberto.cruz at execulink.com
Fri Nov 30 10:05:56 EST 2012


Hello everybody, good afternoon. I am looking for your advice and experience.

We have been working to deploy an MLPPP bundle solution for ADSL on a Cisco platform: a Cisco 7301 as the LNS and a Cisco 891 as the CPE.
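
For context, the CPE brings up two PPPoE sessions (one per ADSL line) and bundles them into a single MLPPP dialer. The sketch below is only an illustration of that arrangement; the interface names, dial-pool number and CHAP hostname are placeholders rather than our exact configuration.

**** CPE configuration sketch (Cisco 891) ****
! Two routed WAN ports, each facing a bridged ADSL modem
interface FastEthernet8
 no ip address
 pppoe enable
 pppoe-client dial-pool-number 1
!
interface GigabitEthernet0
 no ip address
 pppoe enable
 pppoe-client dial-pool-number 1
!
! Both PPPoE sessions join the same dialer, which runs MLPPP
interface Dialer1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp chap hostname int.mlppp@execulink.com
 ppp multilink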

We have been facing some challenges because we don't have control over the ADSL network; we are a wholesale customer of Bell.

If our customer has ADSL links using the same profile (download speed, upload speed), everything works fine; we get twice the speed and the routers don't show any errors about fragmentation or packet loss.
However, if our customer has ADSL links with different speeds and latencies, the download traffic uses only the slowest link, and the CPE reports fragmentation errors:
**** Multilink PPP Interface at CPE ****
Virtual-Access4
  Bundle name: PPPoE-Server
  Remote Endpoint Discriminator: [1] PPPoE-Server
  Local Username: int.mlppp at execulink.com
  Local Endpoint Discriminator: [1] mlPPP_Test
  Bundle up for 04:46:37, total bandwidth 112, load 18/255
  Receive buffer limit 24384 bytes, frag timeout 1741 ms
  Dialer interface is Dialer1
    45/540 fragments/bytes in reassembly list
    8 lost fragments, 4485 reordered
    39/14974 discarded fragments/bytes, 7 lost received
    0x3BA8 received sequence, 0x2E3D sent sequence
  Member links: 2 (max 255, min not set)
    Vi2, since 04:46:37
    Vi3, since 04:46:37

**** Log from CPE ****
Nov 29 18:26:46.712: Vi4 MLP: Lost fragment 51E9 (RX buffer overflow), new seq 51EA
Nov 29 18:26:46.712: Vi4 MLP: Discard reassembled packet
Nov 29 18:26:46.716: Vi4 MLP: Received lost fragment seq 51A5, expecting 51EB
Nov 29 18:26:46.716: Vi4 MLP: Lost fragment 51EB (RX buffer overflow), new seq 51EC
Nov 29 18:26:46.716: Vi4 MLP: Discard reassembled packet
Nov 29 18:26:46.716: Vi4 MLP: Lost fragment 51ED (RX buffer overflow), new seq 51EE
Nov 29 18:26:46.716: Vi4 MLP: Discard reassembled packet
Nov 29 18:26:46.724: Vi4 MLP: Lost fragment 51F5 (RX buffer overflow), new seq 51F6
Nov 29 18:26:46.724: Vi4 MLP: Discard reassembled packet
Nov 29 18:26:46.724: Vi4 MLP: Received lost fragment seq 51AF, expecting 51F7
Nov 29 18:26:46.728: Vi4 MLP: Lost fragment 51F7 (RX buffer overflow), new seq 51F8
Nov 29 18:26:46.728: Vi4 MLP: Discard reassembled packet
Nov 29 18:26:46.728: Vi4 MLP: Received lost fragment seq 51B1, expecting 51F9

In the scenario using ADSL links with different speeds, we have noticed that the multilink interface at the LNS shows the members using different weights:
**** Multilink PPP interface at LNS ****
Virtual-Access3
  Bundle name: int.mlppp at execulink.com
  Remote Username: int.mlppp at execulink.com
  Remote Endpoint Discriminator: [1] mlPPP_Test
  Local Endpoint Discriminator: [1] PPPoE-Server
  Bundle up for 02:26:41, total bandwidth 1155520, load 1/255
  Receive buffer limit 23776 bytes, frag timeout 1000 ms
  Using relaxed lost fragment detection algorithm.
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 756 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0xCBE received sequence, 0xA1C sent sequence
  Member links: 2 (max 255, min not set)
    3xeQl1Nk:Vi5  (192.168.32.104), since 02:26:41, 3750000 weight, 1480 frag size, unsequenced
    3xeQl1Nk:Vi4  (192.168.32.100), since 02:26:41, 583200 weight, 1480 frag size, unsequenced

We have tried to override this behavior by disabling fragmentation. Although we can then achieve the combined speed of the two links, the fragmentation errors at the CPE increase dramatically.
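
(For reference, fragmentation on an IOS MLPPP bundle is turned off with "ppp multilink fragment disable" on the bundle interface; the sketch below only shows where the command sits, and the interface/template numbers are placeholders.)

**** Disabling MLPPP fragmentation (sketch) ****
! LNS (Cisco 7301): on the virtual-template that clones the bundle
interface Virtual-Template1
 ppp multilink
 ppp multilink fragment disable
!
! CPE (Cisco 891): on the dialer that owns the bundle
interface Dialer1
 ppp multilink
 ppp multilink fragment disable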

Is there a workaround to achieve an MLPPP bundle using links with different speeds?

Can the weight assigned to each multilink member be overridden?

Is it normal that the LNS uses the bandwidth calculated from the uplink interface instead of from the multilink member links?
Bundle up for 02:26:41, total bandwidth 1155520, load 1/255
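
For what it is worth, the bandwidth that MLPPP attributes to each member link can be checked at the LNS with standard show commands, for example (the Virtual-Access numbers are the ones from the output above):

**** Checking per-member bandwidth at the LNS ****
show ppp multilink
show interfaces Virtual-Access4 | include BW
show interfaces Virtual-Access5 | include BW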

Regards

Alberto



