[f-nsp] Fwd: Route cache leak on MLX 5.0.00b ?

George B. georgeb at gmail.com
Thu Apr 7 19:52:58 EDT 2011


Should have gone to the entire list.

---------- Forwarded message ----------
From: George B. <georgeb at gmail.com>
Date: Thu, Apr 7, 2011 at 4:23 PM
Subject: Re: [f-nsp] Route cache leak on MLX 5.0.00b ?
To: Dunc <dunc.lockwood at thebunker.net>


A reboot clears the problem just fine.  It isn't a matter of too few entries
being allowed.  For some reason the router can get into a state where it never
clears out stale entries.  It took six months to get into that state, and by
then it had filled up the CAM on both the primary ingress and egress modules.

After rebooting it is just fine, with over 300,000 entries free.
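
For reference, the sequence I was using to poke at this looks roughly like
the following (prompts and the slot number are just for illustration):

  show ip cache        (from the MP: per-module Host/Network/Free/Total)
  rconsole 1           (attach to the LP console in slot 1)
  clear ip cache       (on the LP; in the stuck state this only reaps ~50)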

So it isn't a config problem; it is a "holy crap, this thing just went
wonky" problem.




On Thu, Apr 7, 2011 at 2:42 AM, Dunc <dunc.lockwood at thebunker.net> wrote:

> Hi,
>
> What are your system-max values for ip-cache and ip-route set to?
>
> Are you using a CAM profile with enough room?
>
>
> I've seen very strange things happen once you hit a limit, and I usually
> end up changing something and rebooting.
>
> Cheers,
>
> Dunc
>
>
>
> On 07/04/11 02:28, George B. wrote:
> > Has anyone seen route cache issues on 5.0.00b?
> >
> > I have two routers; both have basically the same peers:
> >
> > First one:
> >
> > IP Routing Table - 349996 entries
> >
> > Second one:
> >
> > IP Routing Table - 350213 entries
> >
> > If I do "sho ip cache" on the first unit, I get:
> >
> > Total IP and IPVPN Cache Entry Usage on LPs:
> >  Module        Host    Network       Free      Total
> >       1         131     349869          0     350000
> >       2         143     349857          0     350000
> >
> > Both units were in that state when I logged into them earlier today.
> >
> > If I rconsole to either module and "clear ip cache", it clears out maybe
> > 50 routes and that's it.
> >
> > After rebooting the second unit and letting it run for a couple of
> > hours, it looks like:
> >
> > Total IP and IPVPN Cache Entry Usage on LPs:
> >  Module        Host    Network       Free      Total
> >       1         219      16763     363018     380000
> >       2          54         33     379913     380000
> >
> > That is much closer to what I expected to see, and closer to what another
> > pair of units shows after running for several weeks (note that I raised
> > the max on this one, but that doesn't really matter).  The first unit is
> > obviously wacky (and, as I mentioned, both units were in that state).
> > For comparison, here is another pair of units that has been running for
> > months:
> >
> > uptime is 160 days 21 hours 40 minutes 8 seconds
> > IP Routing Table - 349593 entries
> >
> > Total IP and IPVPN Cache Entry Usage on LPs:
> >  Module        Host    Network       Free      Total
> >       1        1021      62984     285995     350000
> >       2          39       1358     348603     350000
> >
> >
> >
> > Both units were reporting stuff like this in the logs:
> >
> > Apr  7 01:26:17:I:Warning: MODULE 1 - No free cache entry for new rout
> > Apr  7 01:26:17:I:Warning: MODULE 2 - No free cache entry for new rout
> > Apr  7 01:26:08:I:Warning: cannot allocate free cache entr
> > Apr  7 01:26:04:I:Warning: cannot allocate free cache entr
> > Apr  7 01:25:07:I:Warning: cannot allocate free cache entr
> >
> > It sure looks like some kind of cache leak to me, like it can't reap
> > stale cache entries.
> >
> > Anyone seen anything like this?  Looks like a reboot clears it just fine.
> >