[c-nsp] Interface ignored errors - sizing and allocating buffers

Rodney Dunn rodunn at cisco.com
Wed Nov 24 08:39:16 EST 2004


Maybe I missed it, but you didn't say what
type of interfaces carry this traffic,
both ingress and egress.

Ignores usually mean you don't have any
free buffers/particles to place the packet
in on the receive side.  Why you wouldn't
have free buffers/particles can vary:
e.g. the CPU is too busy to process the rx ring
fast enough and replenish it with free particles
for the next packets coming in on the wire,
or your egress interface has a deep buffer
pool and is holding the particles, which
causes the ingress side to run out.
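
For reference, the ignores show up on the input errors line of
'sh int'; an illustrative excerpt (counter values made up):

  Received 104512 broadcasts, 0 runts, 0 giants, 0 throttles
  3 input errors, 0 CRC, 0 frame, 2 overrun, 3 ignored

'ignored' counts packets dropped because no input buffer was
free when they arrived; 'overrun' means the receiver hardware
couldn't hand the data off to a buffer fast enough.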

It would be nice to see the following:


clear counters

Then turn on:

term exec prompt timestamp

and grab the following:

sh int
sh buff
sh int stat


every 30 seconds until you see the ignores on 
the interface.
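
With timestamping on, IOS prefixes each command's output with
load and time lines, so your samples line up in time; roughly
(illustrative):

Router# sh int stat
Load for five secs: 65%/63%; one minute: 62%; five minutes: 62%
Time source is NTP, 10:38:29.123 UTC Wed Nov 24 2004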

/* Note: don't use 'sh int switching' because
it gives a bunch of junk that's not used anymore.
'sh int stat' is much cleaner for telling whether you are
process switching or switching traffic at interrupt level,
as you should be. */
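
An illustrative 'sh int stat' excerpt (numbers made up): nearly
everything should land in the interrupt-switched 'Route cache'
row; a large or fast-growing 'Processor' row means packets are
being punted to process level:

Router# sh int stat
FastEthernet0/0
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor       1021     125683        894      97412
             Route cache    8837421  992716345    8421077  903356820
                   Total    8838442  992842028    8421971  903454232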

And one other thing: do *NOT* tune any buffers unless
you know exactly what you are doing.  I see way too many
cases where people think they are making things better
by tuning buffers when in reality they make performance
worse.  There are cases where you do it, but they are fewer
than 1% of all cases, and even then you are usually
masking some underlying problem, so please don't do it.
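
For the record, the knobs being warned about here are the global
buffer commands; an illustrative (and NOT recommended) example:

Router(config)# buffers middle permanent 200
Router(config)# buffers middle max-free 400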

Rodney




On Wed, Nov 24, 2004 at 10:38:29AM +0000, Sam Stickland wrote:
> Hi Brian,
> 
> Thanks for your answer, comment are inline
> 
> On Tue, 23 Nov 2004, Brian Feeny wrote:
> 
> >
> > On Nov 23, 2004, at 7:10 PM, Sam Stickland wrote:
> >
> >> Hi,
> >> 
> >> I've got a C7206-NPE200 router that's seeing a regular number of ignored 
> >> errors on a policed interface (input). There's also the occasional overrun.
> >> 
> >> The CPU on this router is currently:
> >> 
> >> CPU utilization for five seconds: 65%/63%; one minute: 62%; five minutes: 
> >> 62%
> >> 
> >> Is this caused by not having enough buffers available, or the CPU usage? 
> >> The only interface I see this on is the one with the policer - there's 
> >> another interface that carries a similar amount of traffic and it doesn't 
> >> exhibit these issues.
> >
> > You need more info to determine the problem.  A lot of the time, CPU can go 
> > up because you're running out of buffers.  But usually there is another 
> > problem which is why you're running out of buffers in the first place.
> 
> My apologies.
> 
> The interface that is clocking up the ignores carries 47Mbps inbound and 
> 40Mbps outbound at peak times. It consists of sub-interfaces, one of which 
> is shaped to 35Mbps outbound and policed to 40Mbps inbound. At peak times 
> the policer drops about 4.5Mbps of inbound traffic. The traffic shaper is 
> less taxed, and very rarely used (peak time usage 160kbps delayed, 8kbps 
> dropped).
> 
> The other two fast ethernet interfaces on this box carry roughly 
> 40Mbps/40Mbps and 9Mbps/3Mbps.
> 
> So the CPU load isn't really unexpected.
> 
> >> Is there anyway to size and allocate extra internal buffers to the ethernet 
> >> interface to prevent the errors?
> >> 
> >
> > You probably don't want to mess with interface level buffers, just stick to 
> > the global buffer stuff.
> 
> But how do I tell which buffers to raise? 'sh buffers failures' shows just a 
> bunch of Middle pool failures from over two days ago.
> 
> > Are you sure the traffic going through that interface is kosher?  I ask 
> > because sometimes there is weird stuff,
> > like DoS attacks and the like, which causes buffer exhaustion.  Something I 
> > like to do is enable
> > NetFlow on the interface and then do "sh ip cache f0/0 flow" or whatever 
> > the interface is, and just look
> > for weirdness in the src/dst, or packet sizes, or a higher pps than normal.
> >
> > Also definitely check "show interface switching" and make sure you don't 
> > have a lot of process switched packets.
> >
> > I assume you're running CEF too.
> 
> Yes, this is running CEF. 'show interface switching' shows:
> 
>            Throttle count          0
>          Drops         RP          0         SP          0
>    SPD Flushes       Fast          0        SSE          0
>    SPD Aggress       Fast          0
>   SPD Priority     Inputs     159998      Drops          0
> 
>       Protocol       Path    Pkts In   Chars In   Pkts Out  Chars Out
>          Other    Process     133913   10901616     132878    8504192
>              Cache misses          0
>                      Fast          0          0          0          0
>                 Auton/SSE          0          0          0          0
>             IP    Process    6961585  635482082    5375398  533529141
>              Cache misses          4
>                      Fast 3098391924 1122983871 2600736287  505566117
>                 Auton/SSE          0          0          0          0
>        DEC MOP    Process          0          0       2218     170786
>              Cache misses          0
>                      Fast          0          0          0          0
>                 Auton/SSE          0          0          0          0
>            ARP    Process    5439287  326560428    9505511  608352704
>              Cache misses          0
>                      Fast          0          0          0          0
>                 Auton/SSE          0          0          0          0
>            CDP    Process      22194    8921988      22150    7154450
>              Cache misses          0
>                      Fast          0          0          0          0
>                 Auton/SSE          0          0          0          0
> 
> Which looks good to me.
> 
> I've since upped the hold-queue to 4096 (both in and out) on the advice of 
> another poster, but it doesn't seem to have made any difference.
> 
> I've just had another look at the interface, and it's also clocked up 
> 50 giants since last night, which is interesting :/
> 
> Sam
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/

