[c-nsp] Multilink PPP incorrect weights

Oliver Boehmer (oboehmer) oboehmer at cisco.com
Wed Jan 5 09:11:18 EST 2005


> Disabling fragments on the bundle hasn't helped; the downstream is
> still restricted to the speed of a single link.

Hmmm, how did you test this? With a single TCP connection or multiple
ones? A single flow can lose throughput to reordering across unequal
links. Are you sending more or less the same amount of data over each
member link now that you've disabled fragmentation?
 
> The response I got from the telco was that bonding wasn't a supported
> feature of their wholesale product, so are unwilling to do anything.
> Apparently they have submitted a feature request to Juniper, but I
> don't expect to get anywhere there.
> 
> I think I'll have to go down the route of modifying the bandwidth on
> the virtual-template each time a session is established, although
> this seems like an awful kludge.

I'd be interested in how the vaccess counters look in all three cases,
i.e.
1) fragmentation + different bw
2) fragmentation + same bw (i.e. after adjusting the vtemplate bw)
3) fragmentation disabled + different bw

Do you see any difference in "show ppp multilink" on the CPE?
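For comparison I'd capture something like the following for each member
link in each case (the Virtual-Access number is a placeholder; "| include
rate" just filters the input/output rate lines):

  show ppp multilink
  show interfaces Virtual-Access1 | include rate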

	oli

> On Wed, 05 Jan 2005, Oliver Boehmer (oboehmer) wrote:
>> Ben,
>> 
>> try to disable multilink fragmentation on the vtemplate ("ppp
>> multilink fragment disable"). The bundle member's weight is used to
>> calculate the fragment size (i.e. we send smaller fragments over
>> "smaller" links to compensate for the lower bandwidth).
>> The caveat is that the receiver might need to queue more packets
>> during re-ordering, so you want to watch "show ppp multilink" on
>> your CPE.
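>> 
>> On the vtemplate that's just the following (Virtual-Template1 is a
>> placeholder for whatever template the sessions clone from):
>> 
>>   interface Virtual-Template1
>>    ppp multilink
>>    ppp multilink fragment disable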
>> 
>> Try to find out if the ERX is able to report a more "accurate"
>> bandwidth; I didn't find a way to overwrite/ignore the received
>> RX speed in the L2TP AVP.
>> 
>> 	oli
>> 
>> Ben White <> wrote on Wednesday, January 05, 2005 10:57 AM:
>> 
>>> Rodney,
>>> 
>>> The 2 links are both 2Mb ADSL circuits plugged into a Cisco 1721
>>> which I'm testing with.
>>> 
>>> The upstream traffic is bundling OK and I get the full doubled
>>> upstream rate.
>>> 
>>> The bit I have no control over is the telco part, which is
>>> documented in SIN374 from http://www.sinet.bt.com/
>>> 
>>> I've got various debugs of the login process and access to the kit
>>> at both ends if anything would help diagnose the problem.
>>> 
>>> The debug at the LNS end when I first noticed the problem showed
>>> this:
>>> 
>>> Aug 16 08:17:56:  Tnl/Sn 2704/70 L2TP: Parse  AVP 19, len 10, flag 0x8000 (M)
>>> Aug 16 08:17:56:  Tnl/Sn 2704/70 L2TP: Framing Type 1
>>> Aug 16 08:17:56:  Tnl/Sn 2704/70 L2TP: Parse  AVP 24, len 10, flag 0x8000 (M)
>>> Aug 16 08:17:56:  Tnl/Sn 2704/70 L2TP: Connect Speed 155520000
>>> Aug 16 08:17:56:  Tnl/Sn 2704/70 L2TP: No missing AVPs in ICCN
>>> 
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: Parse  AVP 19, len 10, flag 0x8000 (M)
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: Framing Type 1
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: Parse  AVP 24, len 10, flag 0x8000 (M)
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: Connect Speed 2315264
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: Parse  AVP 38, len 10, flag 0x0
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: Rx Speed 2315264
>>> Aug 16 08:17:58:  Tnl/Sn 57819/71 L2TP: No missing AVPs in ICCN
>>> 
>>> The established bundle interface at the LNS shows this:
>>> Virtual-Access725, bundle name is my.realm.com/mlpppuser
>>>   Bundle up for 00:01:59, 1/255 load
>>>   Receive buffer limit 24384 bytes, frag timeout 1000 ms
>>>   Using relaxed lost fragment detection algorithm.
>>>     0/0 fragments/bytes in reassembly list
>>>     0 lost fragments, 6 reordered
>>>     0/0 discarded fragments/bytes, 0 lost received
>>>     0x58 received sequence, 0x4A sent sequence
>>>   Member links: 2 (max 2, min not set)
>>>     my.realm.com:Vi2303  (10.0.9.222), since 00:01:59, 583200 weight, 1496 frag size, unsequenced
>>>     my.realm.com:Vi7769  (10.0.9.222), since 00:00:32, 8681 weight, 1496 frag size, unsequenced
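>>> 
>>> Note that the weights seem to simply track the reported connect
>>> speeds (both divide out to roughly 267), so the scheduler favours
>>> the bogus "155Mb" link by about 67:1:
>>> 
>>>   155520000 / 583200 ~= 267
>>>   2315264 / 8681     ~= 267
>>>   583200 / 8681      ~= 67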
>>> 
>>> Someone from the telco has previously said to me that it's due to
>>> the different LACs: connections from Cisco 6400 LACs forward the
>>> correct speed, whereas connections from Juniper ERX LACs do not.
>>> 
>>> Ideally I'd be looking for a way to reset the speeds when the
>>> session is established, preferably through the aaa profile.
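>>> 
>>> Something like this per-user attribute is what I have in mind
>>> (assuming the lcp:interface-config Cisco-AVPair can push interface
>>> commands onto the virtual-access; 2000 kbit/s is just the real
>>> per-link speed):
>>> 
>>>   Cisco-AVPair = "lcp:interface-config=bandwidth 2000"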
>>> 
>>> The only way I've found to fix it so far is to manually modify the
>>> bandwidth on the Virtual-Template after all of the sessions are
>>> established. 
>>> 
>>> This resets the speeds on all the Virtual-Access interfaces and
>>> then both the upstream and downstream bonding work ok.
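>>> 
>>> i.e. roughly the following, re-applied after the sessions come up
>>> (Virtual-Template1 and 2000 kbit/s are placeholders for the actual
>>> template and per-link speed):
>>> 
>>>   interface Virtual-Template1
>>>    bandwidth 2000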
>>> 
>>> Any ideas greatly appreciated.
>>> 
>>> Ben
>>> 
>>> On Tue, 28 Dec 2004, Rodney Dunn wrote:
>>>> Ben,
>>>> 
>>>> I did a bit of reading on this and, from everything I can find,
>>>> the BW of the link actually doesn't come into play when sending
>>>> data on the member links. I mean the BW configured on the link.
>>>> 
>>>> It's the actual transmission rate of the link itself
>>>> that determines which one gets more data.
>>>> 
>>>> From what I have read about the Cisco implementation
>>>> the packets are held at the bundle interface and
>>>> are transmitted on the member links as they can handle them.
>>>> 
>>>> Now with PPPoX it gets tricky because there isn't really
>>>> a direct underlying backpressure mechanism to the bundle.
>>>> 
>>>> In your setup, where are the links actually going to?
>>>> 
>>>> Rodney
>>>> 
>>>>  On Tue, Dec 14, 2004 at 05:12:13PM +0000, Ben White wrote:
>>>>> I'm looking for a way to fix some multilink PPP via L2TP issues
>>>>> I've seen. 
>>>>> 
>>>>> The problem I have is some connections are being sent from our
>>>>> L2TP provider with incorrect connection speed values.
>>>>> 
>>>>> E.g. 2 x 2Mb links, one being established with the correct speed
>>>>> set (2Mb) and one with an incorrect speed (155Mb).
>>>>> 
>>>>> They happily get bundled together, but the multilink load sharing
>>>>> puts all of the outgoing traffic down the supposedly larger link,
>>>>> and when that's full it doesn't spill over onto the other link.
>>>>> 
>>>>> I've tried various fixes (manually setting the bandwidth in the
>>>>> RADIUS profile, on the virtual-template, etc.); however, it mostly
>>>>> gets ignored and the negotiated values override it, and other
>>>>> settings only get applied to the bundle interface, not the
>>>>> individual member links.
>>>>> 
>>>>> Getting the correct speed values sent with the connections isn't
>>>>> possible. 
>>>>> 
>>>>> Any ideas?
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Ben White
>>>>> 


