[c-nsp] MLPP problems
Pavel Dimow
paveldimow at gmail.com
Sun Jan 27 12:42:00 EST 2013
Hi Anton,
thank you very much for your answer. I still have this issue, and so far
the situation is as follows:
At this time I can't change the IOS to a release that supports a
lost-fragment timeout smaller than 1 second.
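(For reference, the sub-second knob from the command reference Anton
linked would go roughly like this on the bundle interface; the "0 200"
value is only an illustration of a 200 ms timeout and is not something
the current code accepts:)

interface Dialer1
 ! assumed syntax per the 'ppp timeout multilink lost-fragment' command
 ! reference; sub-second values need 12.2SB / 12.4(24)T / 15.x on both ends
 ppp timeout multilink lost-fragment 0 200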
Here are the stats from the CPE:
Virtual-Access4, bundle name is LNS-01-01
Endpoint discriminator is LNS-01-01
Bundle up for 00:07:50, total bandwidth 1024, load 26/255
Receive buffer limit 48768 bytes, frag timeout 1000 ms
Interleaving enabled
Using relaxed lost fragment detection algorithm.
Dialer interface is Dialer1
0/0 fragments/bytes in reassembly list
2 lost fragments, 2201 reordered
0/0 discarded fragments/bytes, 0 lost received
0x4537 received sequence, 0x4978 sent sequence
Member links: 4 (max not set, min not set)
Vi2, since 00:07:50, 1600 weight, 1436 frag size, unsequenced
Vi1, since 00:07:50, 1600 weight, 1436 frag size, unsequenced
Vi3, since 00:07:50, 1600 weight, 1436 frag size, unsequenced
Vi5, since 00:07:27, 1600 weight, 1436 frag size, unsequenced
No inactive multilink interfaces
and from the LNS:
Virtual-Access145, bundle name is !removed
Username is !removed
Endpoint discriminator is cpe-01-01
Bundle up for 00:07:39, total bandwidth 9216, load 1/255
Receive buffer limit 48768 bytes, frag timeout 1000 ms
Using relaxed lost fragment detection algorithm.
0/0 fragments/bytes in reassembly list
8 lost fragments, 2797 reordered
0/0 discarded fragments/bytes, 0 lost received
0x47DB received sequence, 0x43AE sent sequence
Member links: 4 (max not set, min not set)
lac_ggt_lw:Vi108 (x.x.x.x), since 00:07:39, 28800 weight, 1496 frag size, unsequenced
lac_ggt_lw:Vi287 (x.x.x.x), since 00:07:39, 28800 weight, 1496 frag size, unsequenced
lac_ggt_lw:Vi198 (x.x.x.x), since 00:07:39, 28800 weight, 1496 frag size, unsequenced
lac_ggt_lw:Vi443 (x.x.x.x), since 00:07:16, 28800 weight, 1496 frag size, unsequenced
Configuring 'ppp link reorders' does not make any difference.
The current config on the CPE is as follows:
controller DSL 0/0/0
mode atm
line-term cpe
line-mode 2-wire line-zero
dsl-mode shdsl symmetric annex B
line-rate auto
interface ATM0/0/0
no ip address
atm restart timer 300
atm ilmi-keepalive
pvc 1/32
vbr-nrt 512 256 20
encapsulation aal5snap
pppoe-client dial-pool-number 1
interface Dialer1
mtu 1440
ip address negotiated
encapsulation ppp
ip tcp adjust-mss 1400
dialer pool 1
dialer idle-timeout 0
no cdp enable
ppp authentication chap pap callin
ppp chap hostname !removed
ppp chap password !removed
ppp pap sent-username !removed password !removed
ppp ipcp route default
ppp link reorders
ppp multilink
ppp multilink fragment delay 50
ppp multilink interleave
ppp multilink queue depth qos 255
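For reference, the PVC shaping change Anton suggests below (cell-shape the
PVC to slightly below the SHDSL physical rate so queuing happens in the
router rather than downstream in the DSLAM path) would look roughly like
this; the rates are placeholders, since I have not confirmed what each
2-wire line actually trains at:

interface ATM0/0/0
 pvc 1/32
  ! placeholder rates: assuming the line syncs around 2304 kbps, shape the
  ! PVC a little below that so cells queue here rather than in the DSLAM
  vbr-nrt 2176 2048 20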
Any thoughts on this? Any help/ideas are highly appreciated.
On Sun, Jan 27, 2013 at 3:55 AM, Anton Kapela <tkapela at gmail.com> wrote:
> On Sat, Jan 26, 2013 at 4:21 AM, Pavel Dimow <paveldimow at gmail.com> wrote:
>> Hi,
>>
>> I have a strange problem with MLPPP over four SHDSL links. The problem
>> is that for a few seconds, or shall I say a minute, everything works fine,
>> then we suddenly experience huge latency, i.e. from 14 ms to 1000 ms, and
>
> that's the default timeout for the ppp reordering receive buffer - I'd
> suggest trying tweaks like:
>
> --read up on 'ppp timeout multilink lost-fragment' as segment ordering
> is part of multilink; the default is 1000 msec.
>
> http://www.cisco.com/en/US/docs/ios/12_2t/dial/command/reference/dftmupp.html#wp1135996
>
> one note: to support reordering timeouts of less than 1 second, you will
> need fairly specific code (12.2SB, 12.4(24)T, or 15 and later) on BOTH
> ENDS of the link.
>
> without looking at your mlppp bundle stats, I can't know which end is
> 'holding up the show' due to lost fragments. it could be the uplink
> from the client, or the downlink to them, which is not getting all the
> mlppp frames delivered.
>
> --experiment with 'ppp link reorders' and short-as-you-can-tolerate
> mlppp reordering timeouts to see if this issue changes character as
> you adjust parameters
>
> --lastly, are you doing MLPPPoA via static PVCs and subints per xDSL
> link? if so, be sure you've set your ATM PVC to vbr-nrt mode, and are
> cell shaping to slightly less than the xDSL physical layer bandwidth
> -- this ensures congestion and queuing happens in your BRAS box, and
> not on the crappy FIFOs on god-knows-what DSLAM your client hangs off
> of... this will rain on anyone's mlppp parade quickly, and definitely
> precipitate issues like this.
>
> -Tk