[c-nsp] Multilink PPP Problems
Adam Piasecki
apiasecki at gmail.com
Fri Nov 4 10:49:34 EST 2005
show int stat
Serial1/0/0:0
       Switching path    Pkts In   Chars In   Pkts Out  Chars Out
            Processor         18        434        210      13381
          Route cache    5447261  668304605   12897860 4068773436
    Distributed cache          0          0          0          0
                Total    5447279  668305039   12898070 4068786817
Serial1/0/1:0
       Switching path    Pkts In   Chars In   Pkts Out  Chars Out
            Processor         13        336        216      13086
          Route cache    5463444  672031818   12897962 4068802684
    Distributed cache          0          0          0          0
                Total    5463457  672032154   12898178 4068815770
Serial1/0/2:0
       Switching path    Pkts In   Chars In   Pkts Out  Chars Out
            Processor         13        336       3628     250861
          Route cache    5470222  672268607   12898279 4068875469
    Distributed cache          0          0          0          0
                Total    5470235  672268943   12901907 4069126330
Serial1/0/3:0
       Switching path    Pkts In   Chars In   Pkts Out  Chars Out
            Processor         13        336       3468     241997
          Route cache    5453416  669362175   12895793 4067992397
    Distributed cache          0          0          0          0
                Total    5453429  669362511   12899261 4068234394
I have tried disabling fragmentation, but no luck there.

All of the T1's have the same number of interface resets. I have
noticed the whole multilink group go down and come back up once or
twice. Could a faulty VIP or PA-MC-4T1 be causing interface resets on
all of the T1's? I guess my next step is to shut each T1 down one at a
time and test from there.
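Something like this is what I had in mind, one member at a time
(untested sketch, using the interface names from my config):

conf t
 interface Serial1/0/0:0
  shutdown
 end
! ping across the bundle with the remaining three T1's, note the loss,
! then restore the link before moving on to the next one:
conf t
 interface Serial1/0/0:0
  no shutdown
 end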
The ACLs are added automatically by our IDS system; I assumed they
had no effect on the individual T1's. I'll remove them, along with the
'no ip route-cache', and see what happens.
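Roughly this on each member interface (a sketch; the ACL name is the
one from the config quoted below):

conf t
 interface Serial1/0/0:0
  no ip access-group AZ-IN1131114133 in
  no ip access-group AZ-IN1131114133 out
  ip route-cache
 end
! repeated for Serial1/0/1:0 through Serial1/0/3:0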
On 11/4/05, Jon Lewis <jlewis at lewis.org> wrote:
> On Fri, 4 Nov 2005, Adam Piasecki wrote:
>
> > I've been through just about every thread on this site, yet I seem to be at
> > a dead end. We have 4 T1's configured as an MLPPP bundle. I'm currently
> > seeing anywhere from 10% to 20% packet loss across the MLPPP group. I get 0%
> > packet loss when the link is running 1 Mb/s or below. It doesn't seem to be
> > a CPU or bandwidth issue. We have other multilink groups in the same router
> > that don't have this problem. I do notice that these T1's are taking more
> > interface resets than the others. Could this be bad hardware? All of the T1's
>
> Interface resets would mean you're losing the T1's from time to time,
> which would explain the packet loss if they're that frequent. Usually,
> that means there's a problem with the circuits/CSU/wiring.
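> A quick look at the controller counters should show it (rough sketch;
> the controller numbering below is a guess, adjust to your chassis):
>
>   show controllers t1 1/0/0
>   show interfaces Serial1/0/0:0
>
> Path/line code violations and slip seconds in the interval counters,
> or climbing carrier transitions on the serial interface, would point
> at the circuit rather than the router.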
>
> > Reply from X.X.X.X: bytes=32 time=2218ms TTL=243
> > Request timed out.  --> discarded fragments/bytes increases
> > Request timed out.  --> discarded fragments/bytes increases
>
> Have you tried disabling fragmentation?
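> Something along these lines on the bundle interface (a sketch; the
> exact knob varies by IOS release):
>
> conf t
>  interface Multilink2
>   ppp multilink fragment disable
>  end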
>
>
> > interface Serial1/0/0:0
> > bandwidth 1544
> > no ip address
> > ip access-group AZ-IN1131114133 in
> > ip access-group AZ-IN1131114133 out
> > encapsulation ppp
> > no ip route-cache
> > load-interval 30
> > tx-queue-limit 26
> > down-when-looped
> > no fair-queue
> > ppp multilink
> > multilink-group 2
>
> Why are you applying ACLs to the member interfaces of an MLPPP group? I
> doubt they have any effect, but I'd get rid of them. I'm also not sure
> why you'd put no ip route-cache on the member interfaces, but I suspect
> it also has no effect.
>
> You could eliminate MLPPP from the mix and just configure these as HDLC
> and use cef per-packet load sharing (per-destination if they're doing
> VOIP) and see if you still have interface resets, odd latency, and packet
> loss.
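> Roughly like this per member (sketch only, the addressing is made up;
> leave off the per-packet line if you want per-destination, which is
> the CEF default):
>
> ip cef distributed
> !
> interface Serial1/0/0:0
>  encapsulation hdlc
>  ip address 192.0.2.1 255.255.255.252
>  ip load-sharing per-packet
>
> Give each T1 its own /30 and four equal-cost routes to the far end,
> and CEF will spread the traffic across them.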
>
> ----------------------------------------------------------------------
> Jon Lewis | I route
> Senior Network Engineer | therefore you are
> Atlantic Net |
> _________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
>