[j-nsp] icmp problems tracing through m20's

Jared Mauch jared at puck.nether.net
Thu May 22 18:11:39 EDT 2003


On Thu, May 22, 2003 at 09:34:41PM +0100, variable at ednet.co.uk wrote:
> On Thu, 22 May 2003, David Brazewell wrote:
> 
> > has anyone ever experienced problems where icmp traces (using mtr) to 
> > destinations through an m20 show no packet loss at the last hop but 
> > varying amounts of packet loss on one of the juniper interfaces?
> 
> Hi Dave,
> 
> Unless you've cranked up the packet rate on MTR (from the default 1 packet
> per second), I wouldn't have thought that the rate limiting would be a
> factor on this unless you specifically set the rate limit really low (the
> defaults shouldn't be an issue).

	Last I knew, this was an unchangeable default that Juniper imposed
in a software release.

> > tracing to the juniper itself shows no packet loss.
> 
> Have you tried running mtr with the -n flag to make sure that the
> interface you are seeing the packet loss on is the same one as the one
> you are tracing to directly?  Do you have default-address-selection
> enabled?  Might also be worthwhile either setting up a filter to log ICMP
> packets from the IP you're running MTR from and/or tcpdumping the LAN
> segment(s) it's on to make sure the packets are making it all the way to
> the M20.
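
(A rough sketch of the kind of filter and capture being suggested here;
the filter name, counter name, source address, and interface names are
all placeholders, not details from the original thread:)

    # Junos: count and log ICMP from the host running mtr
    set firewall filter mtr-icmp-debug term from-probe from source-address 192.0.2.10/32
    set firewall filter mtr-icmp-debug term from-probe from protocol icmp
    set firewall filter mtr-icmp-debug term from-probe then count mtr-icmp
    set firewall filter mtr-icmp-debug term from-probe then log
    set firewall filter mtr-icmp-debug term from-probe then accept
    set firewall filter mtr-icmp-debug term catch-all then accept
    set interfaces so-0/0/0 unit 0 family inet filter input mtr-icmp-debug

    # then check the counter and log, and capture on the LAN segment
    # from another box on that segment
    show firewall filter mtr-icmp-debug
    show firewall log
    tcpdump -n -i eth0 'icmp and host 192.0.2.10'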
> 
> Do you monitor the load on the M20?  Is it busy?  Does show system
> statistics icmp show any other reasons for drops other than rate limiting?
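
(The drop counters are under that same command; the exact counter wording
varies by release, but something along these lines should show whether
the box is discarding ICMP for rate-limit reasons:)

    # look for rate-limit / drop counters in the ICMP statistics
    show system statistics icmp | match "rate|drop"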

	We've seen interesting things on some devices if you are using the
http://www.secsup.org/Tracking/ (backscatter) style traceback and there
is a large DoS that you blackhole on those routers.  There are some ways
you can detect it, I believe, if you log in to the FPC.  I believe there
is an ER open to make this easily CLI-visible in future releases.
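
(For context, and as an assumption on my part about what the "interesting
things" are: the secsup.org-style traceback relies on the router answering
blackholed traffic with ICMP unreachables, e.g. via a reject route like the
hypothetical one below.  Under a large DoS the RE is then generating
unreachables as fast as its rate limit allows, which could plausibly
interact with the ICMP responses mtr depends on.)

    # hypothetical blackhole for an attacked address: "reject" answers
    # with ICMP unreachables (the backscatter), while "discard" would
    # drop silently and generate no extra ICMP load
    set routing-options static route 192.0.2.1/32 reject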

	- Jared

-- 
Jared Mauch  | pgp key available via finger from jared at puck.nether.net
clue++;      | http://puck.nether.net/~jared/  My statements are only mine.

