[c-nsp] MLPPP loss

a. Rahman Isnaini r. Sutan risnaini at indo.net.id
Fri Nov 3 02:55:42 EST 2006


I have tried this before: setting and unsetting "ppp multilink fragmentation" on the multilink interface.
Neither setting gave me any real improvement in link quality at the time.

It has now been a month since I re-applied "ppp multilink fragmentation", and it gives me an amazing 1% loss.
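For context, a minimal sketch of the kind of bundle configuration involved (interface numbers and addressing are hypothetical, and on some IOS releases the knob is `ppp multilink fragment delay` / `ppp multilink fragment disable` rather than the older `ppp multilink fragmentation`):

```
! Sketch only -- adapt to your IOS release and interface names
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink fragmentation    ! the command that was set and unset
 ppp multilink group 1
```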

Rgs / WasSalam,
a.Rahman Isnaini.r.sutan
Network Operation

a. Rahman Isnaini r. Sutan wrote:
> ro2#sh ppp multilink
> 
> Multilink1, bundle name is ro1
>   Bundle up for 4d11h
>   11 lost fragments, 23488 reordered, 1 unassigned
>   11 discarded, 11 lost received, 59/255 load
>   0x12971 received sequence, 0x27F20 sent sequence
>   Member links: 2 active, 0 inactive (max not set, min not set)
>     Serial3/1, since 4d11h, last rcvd seq 012974
>     Serial3/2, since 00:04:45, last rcvd seq 012973
> 
> Oli, I just loaded the traffic over the bundle.
> Currently I'm aggregating 2 x E1s to accommodate more than 2 Mbps of traffic.
> Pinging with 5000- and 100-byte datagram sizes gave this:
> 
> 
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!.!..!.!!!!.!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!.!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!.!!!!!!!.!!!!!!!!!!!!!!!!!.!!!!!!
> !!!!..!!!!.!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!
> !!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!.!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!.!!.!!!!!!!!!!!!!!!.!!!!!!!!.!!!!!!!!.!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!...!!!!!!!!!!!!..!!!!!!!
> .!!!!.!..!.!.!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!.!
> !!!!!!.!!!!!!!!!!.!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!
> 
> !!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!..!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!....!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!..!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!..!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!..!!!!!!!!!!!!!!.!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> !!!!!!!!!!!!!!!!!!!!
> Success rate is 98 percent (980/1000), round-trip min/avg/max = 4/11/92 ms
> 
> Salam,
> a.Rahman Isnaini.r.sutan
> Network Operation
> 
> 
> 
> 
> Oliver Boehmer (oboehmer) wrote:
>> a. Rahman Isnaini r. Sutan <mailto:risnaini at indo.net.id> wrote on
>> Friday, November 03, 2006 8:14 AM:
>>
>>   
>>> Hi Oli,
>>>
>>>
>>> Here are the output :
>>>
>>> ro1#sh ppp multilink
>>> Multilink1, bundle name ro2
>>>   Bundle up for 4d10h
>>>   0 lost fragments, 0 reordered, 0 unassigned
>>>   0 discarded, 0 lost received, 167/255 load
>>>   0x0 received sequence, 0x0 sent sequence
>>>   Member links: 1 active, 0 inactive (max not set, min not set)
>>>     Serial5/1, since 4d10h, no frags rcvd
>>>
>>> ro2#sh ppp multilink
>>> Multilink1, bundle name is ro1
>>>   Bundle up for 4d10h
>>>   0 lost fragments, 0 reordered, 0 unassigned
>>>   0 discarded, 0 lost received, 78/255 load
>>>   0x0 received sequence, 0x0 sent sequence
>>>   Member links: 1 active, 0 inactive (max not set, min not set)
>>>     Serial3/1, since 4d10h, no frags rcvd
>>>
>>>
>>> We didn't load this multilink any longer with any packet
>>> since loss happened.
>>> The config pretty much standard, as similar config has been
>>> running properly on other backbone routers.
>>>     
>>
>> well, you possibly cleared the counters sometime in the past, so there
>> are no drops shown on the bundle. I can't say anything unless you send
>> traffic over the bundle again and observe the drops. Then this output,
>> along with a "show int" of the individual E1s, could help.
>>
>> A possible issue could be a large differential delay on the links (i.e.
>> the RTT on the individual E1s differs greatly), which causes problems in
>> the re-assembly, but this is just a wild guess..
>>
>> 	oli
>>
>>
>>   
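Oli's differential-delay point can be illustrated with a small, self-contained Python sketch (all numbers hypothetical): fragments round-robined across two member links with unequal one-way delays arrive out of order at the reassembly point, which is exactly what drives the "reordered" (and, past the reassembly window, "lost fragments") counters up.

```python
# Minimal sketch (hypothetical numbers): fragments of one MLPPP bundle are
# round-robined across two member links with different one-way delays.
# Unequal delay reorders fragments at the receiver; if the skew exceeds the
# reassembly window, fragments get counted as lost instead of reordered.

def receive_order(n_frags, delay_a_ms, delay_b_ms):
    """Return fragment sequence numbers in arrival order at the receiver."""
    arrivals = []
    for seq in range(n_frags):
        # even fragments go on link A, odd on link B; send spacing is 1 ms
        delay = delay_a_ms if seq % 2 == 0 else delay_b_ms
        arrivals.append((seq * 1.0 + delay, seq))
    arrivals.sort()
    return [seq for _, seq in arrivals]

# Equal delays: fragments arrive in order.
print(receive_order(6, 5, 5))   # [0, 1, 2, 3, 4, 5]
# 3 ms of differential delay: odd-numbered fragments fall behind.
print(receive_order(6, 5, 8))   # [0, 2, 1, 4, 3, 5]
```

With equal delays the arrival order matches the sequence numbers; a few milliseconds of skew between the E1s is already enough to interleave them out of order, matching the 23488 "reordered" seen on ro2's bundle.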

