[c-nsp] GEIP+ high CPU

Security security at cytanet.com.cy
Mon Dec 20 15:44:08 EST 2004


I did not change the load interval; load-interval 30 is the default. I
checked my other C7507 GEIP configurations and they all have the
load-interval 30 command.
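For reference, reverting the interval to the platform default (i.e. removing the explicit load-interval line from the interface) would look like the sketch below; note the actual default value depends on the IOS version:

```
interface GigabitEthernet1/0/0
 no load-interval
```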

>I have seen on some versions of IOS that changing the load-interval from
>the default has had a larger effect on performance than you might expect.
>If you want to get the last little bit of performance out of the card I'd
>suggest you change it back.
>
>
>Matt.
>
>-----Original Message-----
>From: cisco-nsp-bounces at puck.nether.net
>[mailto:cisco-nsp-bounces at puck.nether.net]On Behalf Of M.Palis
>Sent: 20 December 2004 11:29
>To: Amol Sapkal
>Cc: cisco-nsp at puck.nether.net
>Subject: Re: [c-nsp] GEIP+ high CPU
>
>
>Here is the config. How did you determine that it is interrupt switched?
>
>interface GigabitEthernet1/0/0
> bandwidth 10000000
> ip address x.x.x.x.x.
> no ip redirects
> no ip proxy-arp
> ip ospf message-digest-key 5 md5 7 xxxxxxxxxxxxxx
> no ip mroute-cache
> load-interval 30
> negotiation auto
> no cdp enable
> standby 40 ip x.x.x.x
> standby 40 priority 120
> standby 40 preempt
>!
>----- Original Message ----- 
>From: "Amol Sapkal" <amolsapkal at gmail.com>
>To: "M.Palis" <security at cytanet.com.cy>
>Cc: <cisco-nsp at puck.nether.net>
>Sent: Monday, December 20, 2004 12:47 PM
>Subject: Re: [c-nsp] GEIP+ high CPU
>
>
>> Hi,
>>
>> Your process utilization output shows that all your traffic is
>> interrupt switched (85% of 85%). I am not sure of this, but I think
>> LAN interfaces (gig/fast/ethernet) should not be using interrupt
>> switching.
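As an aside, the two numbers in that `show proc cpu` header line can be split apart programmatically. A minimal sketch (the regex and the function name are my own, not from any Cisco tool): in `X%/Y%`, X is total five-second CPU and Y is the share spent at interrupt level, so X - Y ran in scheduled processes.

```python
import re

def split_cpu(line):
    """Split an IOS 'show proc cpu' header into (total, interrupt, process) %.

    In 'X%/Y%', X is total five-second CPU and Y is the portion spent in
    interrupt context (fast/CEF switching); X - Y ran in scheduled processes.
    """
    m = re.search(r"five seconds:\s*(\d+)%/(\d+)%", line)
    if not m:
        raise ValueError("not a 'show proc cpu' header line")
    total, interrupt = int(m.group(1)), int(m.group(2))
    return total, interrupt, total - interrupt

hdr = "CPU utilization for five seconds: 85%/85%; one minute: 86%; five minutes: 86%"
print(split_cpu(hdr))  # all CPU time is interrupt-level here
```

Applied to the output below, this yields (85, 85, 0): essentially zero process-level CPU, all of it in the switching path.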
>>
>> Can you paste the relevant config of the gig interface?
>>
>>
>>
>> Regds,
>> Amol
>>
>>
>>
>> On Mon, 20 Dec 2004 11:06:30 +0200, M.Palis <security at cytanet.com.cy> 
>> wrote:
>>>     Hello all
>>> We are facing high CPU utilization on a GEIP+ (average 80-90%). Below
>>> is the output of show interface and sh contr vip 1 proc cpu, which does
>>> not show which process causes the high CPU or why. I enabled cache flow
>>> to see the type of traffic that passes through the GEIP+, but the
>>> traffic seems normal.
>>>
>>> Can you suggest something that will help figure out the cause of the
>>> high CPU utilization?
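A few commands commonly used to narrow down which switching path the load is taking (a sketch; availability of each command varies by platform and IOS release):

```
show interfaces stats       ! per-path counters: process / route cache / distributed
show interfaces switching   ! per-protocol switching-path breakdown
show ip cache flow          ! flow mix and top talkers (NetFlow already enabled here)
show controllers vip 1 proc cpu
```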
>>>
>>> GigabitEthernet1/0/0 is up, line protocol is up
>>>  Hardware is cyBus GigabitEthernet Interface, address is 000b.60fb.6820
>>> (bia 000b.60fb.6820)
>>>  Internet address is x.x.x.x.
>>>  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
>>>     reliability 255/255, txload 5/255, rxload 2/255
>>>  Encapsulation ARPA, loopback not set
>>>  Keepalive set (10 sec)
>>>  Full Duplex, 1000Mbps, Auto-negotiation,
>>>  output flow-control is on, input flow-control is on
>>>  ARP type: ARPA, ARP Timeout 04:00:00
>>>  Last input 00:00:00, output 00:00:00, output hang never
>>>  Last clearing of "show interface" counters never
>>>  Input queue: 0/75/24425/167 (size/max/drops/flushes); Total output drops: 500
>>>  Queueing strategy: fifo
>>>  Output queue: 0/40 (size/max)
>>>  30 second input rate 99831000 bits/sec, 42356 packets/sec
>>>  30 second output rate 232347000 bits/sec, 44137 packets/sec
>>>     113608673126 packets input, 35803991154611 bytes, 0 no buffer
>>>     Received 7049101 broadcasts (916211 IP multicast)
>>>     0 runts, 0 giants, 412 throttles
>>>     0 input errors, 0 CRC, 0 frame, 235891035 overrun, 179729695 ignored
>>>     0 watchdog, 0 multicast, 0 pause input
>>>     110887072498 packets output, 68984898771503 bytes, 0 underruns
>>>     0 output errors, 0 collisions, 2 interface resets
>>>     0 babbles, 0 late collision, 0 deferred
>>>     2 lost carrier, 0 no carrier, 0 PAUSE output
>>>     0 output buffer failures, 0 output buffers swapped out
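One thing the rate counters above do let you compute is the average packet size, which matters because interrupt-level load scales with packets per second rather than bits per second. A quick sketch using the 30-second rates quoted in the output:

```python
def avg_packet_size(bits_per_sec, pkts_per_sec):
    """Average packet size in bytes, from 'show interface' rate counters."""
    return bits_per_sec / 8 / pkts_per_sec

# 30-second rates taken from the show interface output above
in_avg = avg_packet_size(99_831_000, 42_356)    # input direction
out_avg = avg_packet_size(232_347_000, 44_137)  # output direction
print(round(in_avg), round(out_avg))  # roughly 295 and 658 bytes
```

The small average input packet size (under 300 bytes) means a high per-packet interrupt load for the offered bit rate.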
>>>
>>> sh contr vip 1 proc cpu
>>> show proc cpu from Slot 1:
>>>
>>> CPU utilization for five seconds: 85%/85%; one minute: 86%; five minutes: 86%
>>> PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>>>   1           0         1          0  0.00%  0.00%  0.00%   0 Chunk Manager
>>>   2      251048    537500        467  0.00%  0.00%  0.00%   0 Load Meter
>>>   3     7002796   4876298       1436  0.00%  0.00%  0.00%   0 CEF process
>>>   4    70565776   3054576      23101  0.00%  0.14%  0.14%   0 Check heaps
>>>   5           0         2          0  0.00%  0.00%  0.00%   0 Pool Manager
>>>   6           0         1          0  0.00%  0.00%  0.00%   0 Timers
>>>   7           0         1          0  0.00%  0.00%  0.00%   0 Serial Backgroun
>>>   8       10944     44781        244  0.00%  0.00%  0.00%   0 IPC Dynamic Cach
>>>   9      468876    190192       2465  0.00%  0.00%  0.00%   0 CEF Scanner
>>>  10           0         1          0  0.00%  0.00%  0.00%   0 IPC BackPressure
>>>  11      692964   2675813        258  0.00%  0.00%  0.00%   0 IPC Periodic Tim
>>>  12      540488   2679819        201  0.00%  0.00%  0.00%   0 IPC Deferred Por
>>>  13       60196     27093       2221  0.00%  0.00%  0.00%   0 IPC Seat Manager
>>>  14           0         1          0  0.00%  0.00%  0.00%   0 SERIAL A'detect
>>>  15           0         1          0  0.00%  0.00%  0.00%   0 Critical Bkgnd
>>>  16     1825468    350873       5202  0.00%  0.00%  0.00%   0 Net Background
>>>  17           0         6          0  0.00%  0.00%  0.00%   0 Logger
>>>  18     1065056   2675856        398  0.00%  0.00%  0.00%   0 TTY Background
>>>  19     6532620   2675467       2441  0.00%  0.00%  0.00%   0 Per-Second Jobs
>>>  20     6679672     44771     149199  0.00%  0.00%  0.00%   0 Per-minute Jobs
>>>  21           0         1          0  0.00%  0.00%  0.00%   0 CSP Timer
>>>  22           0         1          0  0.00%  0.00%  0.00%   0 SONET alarm time
>>>  23           0         1          0  0.00%  0.00%  0.00%   0 Hawkeye Backgrou
>>>  24           0         1          0  0.00%  0.00%  0.00%   0 VIP Encap IPC Ba
>>>  25           0         1          0  0.00%  0.00%  0.00%   0 MLP Input
>>>  26          12         1      12000  0.00%  0.00%  0.00%   0 IP Flow LC Backg
>>>  27    44964204 266488100        168  0.00%  0.00%  0.00%   0 VIP MEMD buffer
>>>  28           0         1          0  0.00%  0.00%  0.00%   0 AAA Dictionary R
>>>  29           0         2          0  0.00%  0.00%  0.00%   0 IP Hdr Comp Proc
>>>  30     9387952  26219499        358  0.00%  0.00%  0.00%   0 MDFS MFIB Proces
>>>  31     1018112      1677     607103  0.00%  0.00%  0.00%   0 TurboACL
>>>  32    47172612  26504344       1779  0.00%  0.01%  0.00%   0 CEF LC IPC Backg
>>>  33    10743144   3454406       3109  0.00%  0.00%  0.00%   0 CEF LC Stats
>>>  34           0         4          0  0.00%  0.00%  0.00%   0 CEF MQC IPC Back
>>>  35           0         1          0  0.00%  0.00%  0.00%   0 TFIB LC cleanup
>>>  36           0         1          0  0.00%  0.00%  0.00%   0 Any Transport ov
>>>  37           0         1          0  0.00%  0.00%  0.00%   0 MDFS LC Process
>>>  38           0         1          0  0.00%  0.00%  0.00%   0 LI LC Messaging
>>>  39      143852     24419       5890  0.00%  0.00%  0.00%   0 Clock Client
>>>  40       84956    537101        158  0.00%  0.00%  0.00%   0 DBUS Console
>>>  41           0         1          0  0.00%  0.00%  0.00%   0 Net Input
>>>  42      249052    537499        463  0.00%  0.00%  0.00%   0 Compute load avg
>>>  43           0         1          0  0.00%  0.00%  0.00%   0 IP Flow Backgrou
>>>  44         120        27       4444  0.00%  0.00%  0.00%   1 console_rpc_serv
>>>
>>> _______________________________________________
>>> cisco-nsp mailing list  cisco-nsp at puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>
>>
>>
>> -- 
>> Warm Regds,
>>
>> Amol Sapkal
>>
>> --------------------------------------------------------------------
>> An eye for an eye makes the whole world blind
>> - Mahatma Gandhi
>> -------------------------------------------------------------------- 
>


More information about the cisco-nsp mailing list