[nsp] 3550 CPU OVERLOAD 80%

Jim Devane jim at powerpulse.cc
Tue Sep 30 15:45:25 EDT 2003


Thank you for all replies so far...


I didn't think the percentages were all that high either. Yes, these
were all the highest percentages. 

I am thinking that MRTG/RRD and my own billing software might be polling
the switch at the same time and causing it to work harder. I don't know
why else the SNMP Engine process would be so high.
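
If it helps to sanity-check that theory, my plan is to watch the SNMP
packet counters while each poller runs, stagger the MRTG and billing
cron jobs so they stop landing on the same minute, and give each poller
its own read-only community locked to its source address so I can tell
them apart. Rough sketch only; the 192.0.2.x addresses and community
strings are placeholders for my pollers, not the real config:

show snmp | include packets

conf t
 access-list 10 permit host 192.0.2.10
 access-list 20 permit host 192.0.2.20
 snmp-server community mrtg-ro RO 10
 snmp-server community billing-ro RO 20
end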

I did look at the buffers to find the Pool Manager statistics. There is
likely a story there, but I am not smart enough to uncover what it is. I
see very large numbers on the VeryBig buffers (output pasted below).

If anyone has the time/inclination to educate me a little further on how
to interpret this information, I would be very grateful.

I am looking into how to tune the buffers a bit better, allocating more
memory to the VeryBig pool to keep the trims/creates from escalating so
much.
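
Something like the following is what I have in mind, assuming this image
honors the generic buffer-tuning commands; the numbers are guesses
scaled off the peak of 41 shown below, not recommendations:

conf t
 buffers verybig permanent 20
 buffers verybig min-free 10
 buffers verybig max-free 50
end

The goal being to stop the constant create-then-trim cycle rather than
to throw memory at the pool.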

Thanks for all the helpful suggestions.
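
Also, following Steve's suggestion below about bursts of link-local
traffic, I will start watching the per-port broadcast/multicast counters
when the CPU spikes, e.g.:

show interfaces counters

and see whether the InMcastPkts/InBcastPkts columns (if I am remembering
the column names right) are climbing unusually fast on any port.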

Jim


pwps-esw01#sh proc cpu | ex 0.00
CPU utilization for five seconds: 80%/29%; one minute: 59%; five minutes: 66%
 PID Runtime(ms)    Invoked      uSecs   5Sec   1Min   5Min TTY Process
   6   121158700  305796996        396 10.81%  3.27%  3.30%   0 Pool Manager
  10     4038300    9549427        422  0.08%  0.11%  0.11%   0 ARP Input
  26    22758180   15253385       1492  0.40%  0.43%  0.46%   0 Vegas Statistics
  32     1900480   17956722        105  0.16%  0.11%  0.10%   0 L3MD_STAT
  35    98403284 1098326732         89  2.70%  3.06%  3.69%   0 VUR_MGR bg proce
  37   103332616  439570665        235  5.98%  5.36%  7.46%   0 IP Input
  83   136201216  758460994        179 10.14%  3.16%  3.08%   0 IP SNMP
  84    42618128  305587263        139  3.60%  1.06%  1.15%   0 PDU DISPATCHER
  85   168998720  303439545        556 16.71%  4.74%  4.62%   0 SNMP ENGINE

VeryBig buffers, 4520 bytes (total 1, permanent 0, peak 41 @ 5d14h):
     1 in free list (0 min, 0 max allowed)
     292747356 hits, 17174211 misses, 310003615 trims, 310003616 created
     0 failures (0 no memory)
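
For anyone else following along, my (possibly wrong) reading of that
counter line: hits are requests satisfied from the free list, misses are
requests that found the free list empty, trims are buffers destroyed
because the free list was over "max allowed" (0 here), and created are
buffers built on demand by the Pool Manager. Created minus trims is
exactly 1, i.e. the single buffer now in the pool, so with permanent 0
every buffer gets created on the fly and trimmed straight back, which
would go a long way toward explaining the Pool Manager CPU time.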

Of course there were other sizes that had trims/creates, but they were
much more reasonable (in the hundreds instead of hundreds of millions!).

Also of interest, though again I am not sure what to make of the
information, were the CPU/interface buffers:
CPU1 buffers, 1524 bytes (total 6, permanent 6):
     1 in free list (0 min, 6 max allowed)
     144078499 hits, 24013083 fallbacks

CPU0 buffers, 1524 bytes (total 16, permanent 16):
     1 in free list (0 min, 16 max allowed)
     23993779 hits, 1517689 fallbacks 

CPU2 buffers, 1524 bytes (total 10, permanent 10):
     1 in free list (0 min, 10 max allowed)
     16738137 hits, 1954369 fallbacks
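
I will also keep an eye on whether those fallback counts keep climbing
(my understanding is that a fallback just means the interface pool was
empty and the request fell back to the public pools, which is fine in
moderation but expensive in bulk). A quick filter like this, run a few
minutes apart, should show it without wading through the whole display,
assuming this image accepts regex alternation in the include:

show buffers | include buffers,|fallbacks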




-----Original Message-----
From: Stephen J. Wilcox [mailto:steve at telecomplete.co.uk] 
Sent: Saturday, September 27, 2003 1:35 AM
To: Jim Devane
Cc: cisco-nsp at puck.nether.net
Subject: Re: [nsp] 3550 CPU OVERLOAD 80%

Those percentages aren't all that high; were they the highest then? (I
usually do sh proc cpu | e 0.00 to get a quick list of all processes
with interesting CPU.)

The switches tend to overload when they're doing things other than
forwarding traffic. In particular, see if you can spot any unusual
bursts of link-local packets on any interface; very often you will find
this is a result of ARP or spanning tree (watch the multicast/broadcast
counters, they shouldn't normally be going up all that quickly).

Steve

On Fri, 26 Sep 2003, Jim Devane wrote:

> Hello All,
> 
> 
> I started mysteriously receiving these CPU Overload messages a few
> days ago. There is no pattern to them and they clear themselves (the
> switch will fall back to 20% CPU or even 10%), but I am getting spikes
> of 85% or so.
> 
> I captured a sh proc cpu when an alarm came in and I was able to see
> what the CPU was doing, but I don't understand what some of these
> processes are. I am in need of a steer toward info or an outright
> explanation of these few things and whether they are anything to worry
> about. In general I am not too worried, since the switch does not stay
> at a high CPU cycle level, but I would like to learn about what is
> causing it to get so high.
> 
> The sh proc cpu offenders were:
> 
> SNMP Engine 4 - 7%: I have a good idea about this one, but I am not
> sure whether 4-7% is too high or not.
> 
> IP SNMP 3 - 5%: Not sure of the difference between SNMP Engine and IP
> SNMP.
> 
> Pool Manager 2 - 3%: Not really sure about this one.
> 
> VUR_MGR bg proc 3 - 4%: No idea.
> 
> And of course IP Input was about 9 - 11%; not sure if that seems high
> or not. I think the IP Input is probably OK.
> 
> Anyway, if there are any steers towards resources, CCO links, or
> ideas/suggestions/ explanations I would appreciate it.
> 
> 
> Thanks,
> Jim
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
> 


