[j-nsp] icmp problems tracing through m20's

Josef Buchsteiner josefb at juniper.net
Sat May 24 20:14:39 EDT 2003


       David,


       let me add something here which might be useful. There are ICMP
       tasks which are handled on the PFE complex directly and never
       reach the Routing Engine; the TTL-expired messages used for
       traceroute and MTU-exceeded messages are among them. You can
       look at the statistics with the following command:
       show pfe statistics ip icmp

       For the ICMP task on the PFE there is a rate limiter of 50 pps
       per interface and 500 pps per PFE complex. A T-series box
       usually contains more than one PFE complex. These limits were
       increased as of version 5.3R3 to make traceroute happier.
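To illustrate how the two limits interact, here is a toy Python model (purely illustrative, not Juniper code; only the 50 pps / 500 pps defaults above come from Junos, everything else is made up):

```python
import time

class TwoTierIcmpLimiter:
    """Toy model of the PFE's ICMP rate limiting: a per-interface
    cap and an overall per-PFE-complex cap, both in packets/second.
    The real PFE implementation is not public; this is a sketch."""

    def __init__(self, per_ifl=50, per_pfe=500):
        self.per_ifl = per_ifl
        self.per_pfe = per_pfe
        self.window = None      # current one-second window
        self.ifl_counts = {}    # packets sent per interface this window
        self.pfe_count = 0      # packets sent on the whole complex this window

    def allow(self, ifl, now=None):
        now = int(now if now is not None else time.time())
        if now != self.window:  # new second: reset the counters
            self.window = now
            self.ifl_counts = {}
            self.pfe_count = 0
        if self.ifl_counts.get(ifl, 0) >= self.per_ifl:
            return False        # per-interface limit hit
        if self.pfe_count >= self.per_pfe:
            return False        # per-complex limit hit
        self.ifl_counts[ifl] = self.ifl_counts.get(ifl, 0) + 1
        self.pfe_count += 1
        return True
```

With the defaults, a 60-packet burst of TTL-expired replies on one interface within a single second gets 50 through and drops 10; twenty interfaces each running at their 50 pps limit are together still capped at 500 pps for the complex.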

       show system statistics icmp is the view from the Routing Engine
       and not from the PFE (Packet Forwarding Engine). Here ICMP is
       rate limited to 1000 pps with a token bucket. So pings to local
       interfaces are handled by the Routing Engine.
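A token bucket differs from a plain per-second counter in that it lets a short burst through, up to the bucket depth, while holding the long-run average to the configured rate. A minimal sketch (the depth of 1000 tokens is my assumption; only the 1000 pps rate is stated above):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at `rate`
    per second up to `depth`; each packet spends one token.
    Illustrative only -- the RE's actual bucket depth is not given."""

    def __init__(self, rate=1000.0, depth=1000.0):
        self.rate = rate
        self.depth = depth
        self.tokens = depth     # bucket starts full
        self.last = 0.0         # time of the previous call

    def allow(self, now):
        # refill proportionally to elapsed time, capped at the depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A burst larger than the depth is trimmed to the depth; one second later the bucket has fully refilled and traffic passes again.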

       I'm not sure whether you are running into one of these throttled
       situations, but you can now also check with the pfe command and
       see whether traceroute suffers, since that would be handled on
       the PFE side and is one of the tasks MTR performs.
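The reason this shows up in mtr as loss at an intermediate hop but not at the destination is that mtr's per-hop loss figure counts missing ICMP time-exceeded replies, not dropped transit packets. A toy calculation using the 50 pps figure above (the probe rate is hypothetical):

```python
def mtr_hop_loss(probe_pps, hop_limit_pps):
    """Fraction of mtr probes to an intermediate hop that appear
    'lost' when the hop rate-limits its ICMP time-exceeded replies.
    Toy model: transit forwarding itself is assumed lossless."""
    if probe_pps <= hop_limit_pps:
        return 0.0
    return (probe_pps - hop_limit_pps) / probe_pps
```

Probing at 100 pps through a hop whose ICMP task is limited to 50 pps shows 50% "loss" at that hop, even though the hop forwards every transit packet; the destination, which answers the probes itself, still shows 0%.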


       hope this helps
       Josef
       
       

Friday, May 23, 2003, 1:12:07 PM, you wrote:


> Hi Harry

> still not seeing anything on the icmp rate limiting:

> dbrazewell at router> show system statistics icmp
> icmp:
>         0 drops due to rate limit
>         5733 calls to icmp_error

> and the icmp_errors are not climbing at the same rate as I am seeing 
> packets being dropped in my traces

> it's a similar story for policing:

> dbrazewell at router> show interfaces ge-0/3/0 extensive | match polic
>     Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Policed discards: 
> 3743, L3 incompletes: 0,

> these policed discards are not increasing at the same rate either

> do you have any comment on what Niels Raijer said about changing code 
> versions? Although I suspect that this was because there are different 
> icmp throttling thresholds between versions

> Thanks

> David


> On Thu, 22 May 2003, Harry Reynolds wrote:

>> Yes, they should show up as ICMP drops. I have:
>> 
>>   .5                  .6
>> r3------------------r4
>> 
>>      10.0.2.4/30
>> 
>> At r3:
>> 
>> root at r3% ping -l 100 10.0.2.6
>> PING 10.0.2.6 (10.0.2.6): 56 data bytes
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .....................................................................
>> .........
>> round-trip min/avg/max/stddev = 0.642/1.757/41.217/3.911 ms
>> 
>> At r4:
>> [edit]
>> lab at r4# run show system statistics | find icmp
>> icmp:
>>         173 drops due to rate limit <<<
>>         0 calls to icmp_error
>>         0 errors not generated because old message was icmp
>>         Output histogram:
>>                 echo reply: 25703
>>         0 messages with bad code fields
>>         0 messages less than the minimum length
>>         0 messages with bad checksum
>> 
>> 
>> [edit]
>> lab at r4# run show system statistics | find icmp
>> icmp:
>>         181 drops due to rate limit <<<
>>         0 calls to icmp_error
>>         0 errors not generated because old message was icmp
>>         Output histogram:
>>                 echo reply: 28611
>>         0 messages with bad code fields
>>         0 messages less than the minimum length
>>         0 messages with bad checksum
>>         0 messages with bad source address
>>         0 messages with bad length
>>         0 echo drops with broadcast or multicast destination address
>> 
>> Are there any policed discards occurring on the interface being pinged?
>> 
>> [edit]
>> lab at r4# run show interfaces so-0/1/0 extensive | match polic
>>     Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Giants: 0,
>> Bucket drops: 0, Policed discards: 0,
>>     Policing bucket: Disabled
>> 
>> 
>> 
>> 
>> > -----Original Message-----
>> > From: David Brazewell [mailto:davidb at ednet.co.uk]
>> > Sent: Thursday, May 22, 2003 11:21 AM
>> > To: Harry Reynolds
>> > Cc: juniper-nsp at puck.nether.net
>> > Subject: RE: [j-nsp] icmp problems tracing through m20's
>> >
>> >
>> >
>> >
>> > would this rate limiting show up in "show system statistics icmp"?
>> >
>> > 'cos I've got the following on this router:
>> >
>> >         0 drops due to rate limit
>> >
>> > Cheers
>> >
>> > david
>> >
>> >
>> > On Thu, 22 May 2003, Harry Reynolds wrote:
>> >
>> > > Hello,
>> > >
>> > > I have not messed with mtr, but can confirm that ICMP
>> > rate limiting
>> > > on the fxp1 interface will result in some packet loss
>> > when performing
>> > > rapid (flood) pings that are destined to a PFE interface (this
>> > > traffic must transit fxp1). A recent email indicated
>> > these parameters
>> > > are now in effect; I have not confirmed:
>> > >
>> > > The default rate limiting is 50 per second per logical interface
>> > > and I think 500 per box per second.
>> > >
>> > >
>> > >
>> > > > -----Original Message-----
>> > > > From: juniper-nsp-bounces at puck.nether.net
>> > > > [mailto:juniper-nsp-bounces at puck.nether.net]On Behalf Of
>> > > > David Brazewell
>> > > > Sent: Thursday, May 22, 2003 11:00 AM
>> > > > To: juniper-nsp at puck.nether.net
>> > > > Subject: [j-nsp] icmp problems tracing through m20's
>> > > >
>> > > >
>> > > >
>> > > > Hi
>> > > >
>> > > > has anyone ever experienced problems where icmp traces
>> > > > (using mtr) to
>> > > > destinations through an m20 show no packet loss at the
>> > > > last hop but
>> > > > varying amounts of packet loss on one of the juniper
>> > interfaces?
>> > > >
>> > > > tracing to the juniper itself shows no packet loss.
>> > > >
>> > > > It has been suggested that this may be down to default
>> > > > icmp throttling on
>> > > > the junipers. does anyone know anything about this?
>> > > >
>> > > > Thanks
>> > > >
>> > > > David
>> > > >
>> > > >
>> > > > _______________________________________________
>> > > > juniper-nsp mailing list juniper-nsp at puck.nether.net
>> > > > http://puck.nether.net/mailman/listinfo/juniper-nsp
>> > >
>> > > --
>> > > Virus scanned by edNET.
>> > >
>> >
>> 
>> 




