[c-nsp] [nsp] Strange multilink ppp issue with T1s

Rodney Dunn rodunn at cisco.com
Thu Jul 29 17:50:25 EDT 2004


The 75xx uses a concept called transmit accumulators to
detect congestion on the output interface.  If you are
seeing output drops you need to do:

'show ppp multilink' and look at the first member link in the bundle.

Then do 'sh contr cbus | incl <interface>' and see if you
have available accumulators to transmit.  If you don't, they'll
register as output drops.  The value decrements, so if it's
consistently less than the limit with a low volume of traffic,
you have an accumulator loss problem.
99.99% of the time that is a software bug, although I have
seen a couple of instances where it was bad hardware.
The value should never be above the limit for the output.
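
As a rough illustration of that sequence (the interface name here is just an
example, not one taken from this thread):

  router# show ppp multilink
    <note the first member link listed under the bundle, e.g. Serial4/0/1:0>
  router# show controllers cbus | include Serial4/0/1
    <compare the reported transmit accumulator value against its limit>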

You can try a ping over the bundle from a router behind the
75xx with the record option set and see if that works.
We reserve a few accumulators for packets coming from process level,
so a lot of times those pings will work.
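
For example, an abbreviated extended ping from a router one hop behind the
7500 (the target here is a placeholder, and prompts left at their defaults
are skipped):

  downstream# ping
  Protocol [ip]:
  Target IP address: <far end of the bundle>
  ...
  Extended commands [n]: y
  ...
  Loose, Strict, Record, Timestamp, Verbose[none]: record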

Two things.

I never recommend MLPPP without dCEF on the 75xx.
And I don't recommend anyone turn it on without the fixes for
these bugs:

CSCin36465 Watchdog crashed because of MLPPP
CSCec00268 Input drops and * throttles on PPP multilink interface
CSCea59948 Output stuck on a T3 Port Adaptor
CSCed29590 Multilink PPP link flap causes output frozen for member link

These are critical bug fixes required for dMLPPP on a 75xx.
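
For reference, a minimal dMLPPP configuration along those lines (interface
names, group number, and addressing are placeholders, not taken from this
thread):

  ip cef distributed
  !
  interface Multilink1
   ip address <address> <mask>
   ppp multilink
   multilink-group 1
  !
  interface Serial4/0/1:0
   no ip address
   encapsulation ppp
   ppp multilink
   multilink-group 1

(Newer IOS releases use 'ppp multilink group 1' in place of 'multilink-group 1'.)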

I spent a lot of time with a customer helping get these bugs fixed,
and they have been running 12.0(25)S3 with dMLPPP on the 75xx with
no issues for over six months.

Hope this helps.
Rodney



On Thu, Jul 29, 2004 at 05:01:40PM -0400, Bill Wichers wrote:
> Below is my posting again from June 11. Usually a reload of the router
> will correct the issue for a time (the duration seems to vary somewhat),
> after which it starts dropping packets again. We see delayed packets as
> well as dropped packets, the delays usually being about 100-200 ms while
> normal RTTs are in the 3-5 ms range. The problem seems to be unrelated
> to the load on the link.
> 
>      -Bill
> 
> ---- snip 8< ----
> We have three T1 circuits on one end (a 7507) going into a PA-MC-4T1 in a
> VIP2-40; the other end consists of three channels on a DS3 in a PA-MC-2T3+
> on another VIP2-40 in one side of a 7576. Both routers have RSP4s running
> IOS 12.2(6a).
> 
> What we are seeing is that the multilink bundle has high levels of packet
> loss at times, and at other times none -- but all three T1 circuits are
> clean at all times. The link seems to have unpredictable lag (and behaves
> especially poorly in packet lag and loss when loaded) regardless of the
> settings for fragmentation. The strangest part is that the 7507 end sees
> the circuit frequently moving far more traffic than the bundle's max
> capacity of 4.6 Mb/s, while the other end sees more believable traffic
> numbers. The available bandwidth is reported correctly on both ends. I
> included the results of 'show ppp multilink' and 'show int multilink1' for
> each end. And we have tried the 7507 both with distributed CEF and
> without. No difference... Fragmentation is disabled on both ends at the
> moment (and in the 'show' info below).
> 
> Really hoping someone can provide a bit of insight since we can't seem to
> find any info on this anywhere and have tried everything we can think of
> to fix it.
> 
>      -Bill
> 
> [7507 begin]
> troy1>show int multilink1
> Multilink1 is up, line protocol is up
>   Hardware is multilink group interface
>   Internet address is x.x.x.x/30
>   MTU 1500 bytes, BW 4608 Kbit, DLY 100000 usec,
>      reliability 255/255, txload 27/255, rxload 135/255
>   Encapsulation PPP, loopback not set
>   Keepalive set (10 sec)
>   DTR is pulsed for 2 seconds on reset
>   LCP Open, multilink Open
>   Open: IPCP
>   Last input 00:00:00, output never, output hang never
>   Last clearing of "show interface" counters 4w3d
>   Queueing strategy: fifo
>   Output queue 0/40, 2069415 drops; input queue 0/75, 396 drops
>   30 second input rate 2442000 bits/sec, 654 packets/sec
>   30 second output rate 5698000 bits/sec, 1018 packets/sec
>      692034835 packets input, 3061367520 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      1017262671 packets output, 1171349294 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 458140128 output buffers swapped out
>      0 carrier transitions
> troy1>
> troy1>show ppp multilink
> 
> Multilink1, bundle name is sfld2core1
>   Bundle up for 3w0d
>   118828 lost fragments, 43010323 reordered, 5 unassigned
>   120023 discarded, 120023 lost received, 1/255 load
>   0x534FDF received sequence, 0x192D02 sent sequence
>   Member links: 3 active, 0 inactive (max not set, min not set)
>     Serial4/0/1:0, since 3w0d, last rcvd seq 534FE7
>     Serial4/0/2:0, since 3w0d, last rcvd seq 534FDF
>     Serial4/0/0:0, since 1w2d, last rcvd seq 534FE6
> [7507 end]
> 
> [7576 begin]
> sfld2core1>show int multilink1
> Multilink1 is up, line protocol is up
>   Hardware is multilink group interface
>   Internet address is y.y.y.y/30
>   MTU 1500 bytes, BW 4608 Kbit, DLY 100000 usec,
>      reliability 255/255, txload 4/255, rxload 164/255
>   Encapsulation PPP, loopback not set
>   Keepalive set (10 sec)
>   DTR is pulsed for 2 seconds on reset
>   LCP Open, multilink Open
>   Open: IPCP
>   Last input 00:00:02, output never, output hang never
>   Last clearing of "show interface" counters 3w0d
>   Queueing strategy: fifo
>   Output queue 0/40, 0 drops; input queue 0/75, 5 drops
>   30 second input rate 2972000 bits/sec, 587 packets/sec
>   30 second output rate 86000 bits/sec, 128 packets/sec
>      335627760 packets input, 2579977254 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      124442011 packets output, 1807068880 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>      0 carrier transitions
> sfld2core1>
> sfld2core1>show ppp multilink
> Multilink1, bundle name is troy1
>   Bundle up for 3w0d
>   Bundle is Distributed
>   5318 lost fragments, 3944230 reordered, 0 unassigned
>   441 discarded, 441 lost received, 84/255 load
>   0x193A1C received sequence, 0x555F07 sent sequence
>   Member links: 3 active, 0 inactive (max not set, min not set)
>     Serial1/0/0/25:0, since 3w0d, no frags rcvd
>     Serial1/0/0/24:0, since 3w0d, no frags rcvd
>     Serial1/0/0/26:0, since 1w0d, no frags rcvd
> [7576 end]
> 
> > Can you ask the question again for me?
> >
> > What platform?
> > What code?
> > What problem?
> >
> > Thanks,
> > Rodney
> >
> > On Thu, Jul 29, 2004 at 01:56:18PM -0400, Bill Wichers wrote:
> >> I'm assuming you are referring to my post from some time back.
> >> Unfortunately, no, I have not found any solution to the problem, and I've
> >> tried just about everything I can find info on. This seems to be a big
> >> stumper -- I didn't even get any responses here :-(
> >>
> >> Our ultimate solution was to cheat and order a DS3, which has the added
> >> advantage of being faster too... Since there is already fiber in the
> >> building, and the other end is our facility in a CO, this was a reasonable
> >> (although more expensive) option for us. I would still like info on the
> >> MLPPP issue since we do have some POPs still fed this way, but for whatever
> >> reason the problem doesn't seem to have as much effect on 2-T1 bundles as
> >> on bundles with 3 or more.
> >>
> >>      -Bill
> >>
> >> > Did you ever find a solution to this problem?  I have a problem that is
> >> > very, very similar.  Thanks in advance.
> >> >
> >> >
> >> >
> >> > John
> >> >
> >> *****************************
> >> Waveform Technology
> >> UNIX Systems Administrator
> >>
> >>
> >> _______________________________________________
> >> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/cisco-nsp
> >> archive at http://puck.nether.net/pipermail/cisco-nsp/
> >
> 
> 
> *****************************
> Waveform Technology
> UNIX Systems Administrator
> 

