[c-nsp] GEIP+ high CPU
Rodney Dunn
rodunn at cisco.com
Mon Dec 20 10:18:21 EST 2004
As you appear to have found out, this is
about all that card can do.
I usually see questions about ignores
on these cards at around 50k pps.
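(Quick math from your 'show int' below, assuming that 50k
figure is aggregate: ~42k pps in plus ~44k pps out is roughly
86k pps through the one VIP, so you are well past it.)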
The VIP CPU gets too busy to service the rx interrupts.
Move some load off, or get a device that is designed
to handle this type of load.
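(You can see it in your VIP output below: the 85%/85% means
85% total CPU with essentially all of it spent at interrupt
level, i.e. in the switching path, which is why no individual
process shows anything. For more detail something like

  show interfaces GigabitEthernet 1/0/0 switching
  show controllers vip 1 tech-support

should break out the switching path counters and the VIP's
own buffer and rx/tx stats.)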
As others have said, this board was developed just
to give customers a GigE *connection* to the backbone
and was never meant to run anywhere close to line rate.
I need to go back and do some homework, but if I remember
correctly each VIP slot is a 300 Mbps connection to the
backplane, so that's all you can get anyway.
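(Rough math again from your counters: ~100 Mb/s in plus
~232 Mb/s out is ~332 Mb/s through that one slot, so if the
300 Mbps figure is right you are already over it, which would
line up with the overruns and ignores you're seeing.)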
Rodney
On Mon, Dec 20, 2004 at 11:06:30AM +0200, M.Palis wrote:
> Hello all
> We are facing high CPU utilization on a GEIP+ (average 80-90%). Below is
> the output of 'show interface' and 'sh contr vip 1 proc cpu', which does not
> show which process causes the high CPU or why. I enabled flow caching to see
> the type of traffic that passes through the GEIP+, but the traffic seems
> normal.
>
> Can you suggest something that would help figure out the cause of the high
> CPU utilization?
>
>
> GigabitEthernet1/0/0 is up, line protocol is up
> Hardware is cyBus GigabitEthernet Interface, address is 000b.60fb.6820
> (bia 000b.60fb.6820)
> Internet address is x.x.x.x.
> MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
> reliability 255/255, txload 5/255, rxload 2/255
> Encapsulation ARPA, loopback not set
> Keepalive set (10 sec)
> Full Duplex, 1000Mbps, Auto-negotiation,
> output flow-control is on, input flow-control is on
> ARP type: ARPA, ARP Timeout 04:00:00
> Last input 00:00:00, output 00:00:00, output hang never
> Last clearing of "show interface" counters never
> Input queue: 0/75/24425/167 (size/max/drops/flushes); Total output drops: 500
> Queueing strategy: fifo
> Output queue: 0/40 (size/max)
> 30 second input rate 99831000 bits/sec, 42356 packets/sec
> 30 second output rate 232347000 bits/sec, 44137 packets/sec
> 113608673126 packets input, 35803991154611 bytes, 0 no buffer
> Received 7049101 broadcasts (916211 IP multicast)
> 0 runts, 0 giants, 412 throttles
> 0 input errors, 0 CRC, 0 frame, 235891035 overrun, 179729695 ignored
> 0 watchdog, 0 multicast, 0 pause input
> 110887072498 packets output, 68984898771503 bytes, 0 underruns
> 0 output errors, 0 collisions, 2 interface resets
> 0 babbles, 0 late collision, 0 deferred
> 2 lost carrier, 0 no carrier, 0 PAUSE output
> 0 output buffer failures, 0 output buffers swapped out
>
> sh contr vip 1 proc cpu
> show proc cpu from Slot 1:
>
> CPU utilization for five seconds: 85%/85%; one minute: 86%; five minutes: 86%
>  PID Runtime(ms)   Invoked  uSecs    5Sec   1Min   5Min TTY Process
>    1           0         1      0   0.00%  0.00%  0.00%   0 Chunk Manager
>    2      251048    537500    467   0.00%  0.00%  0.00%   0 Load Meter
>    3     7002796   4876298   1436   0.00%  0.00%  0.00%   0 CEF process
>    4    70565776   3054576  23101   0.00%  0.14%  0.14%   0 Check heaps
>    5           0         2      0   0.00%  0.00%  0.00%   0 Pool Manager
>    6           0         1      0   0.00%  0.00%  0.00%   0 Timers
>    7           0         1      0   0.00%  0.00%  0.00%   0 Serial Backgroun
>    8       10944     44781    244   0.00%  0.00%  0.00%   0 IPC Dynamic Cach
>    9      468876    190192   2465   0.00%  0.00%  0.00%   0 CEF Scanner
>   10           0         1      0   0.00%  0.00%  0.00%   0 IPC BackPressure
>   11      692964   2675813    258   0.00%  0.00%  0.00%   0 IPC Periodic Tim
>   12      540488   2679819    201   0.00%  0.00%  0.00%   0 IPC Deferred Por
>   13       60196     27093   2221   0.00%  0.00%  0.00%   0 IPC Seat Manager
>   14           0         1      0   0.00%  0.00%  0.00%   0 SERIAL A'detect
>   15           0         1      0   0.00%  0.00%  0.00%   0 Critical Bkgnd
>   16     1825468    350873   5202   0.00%  0.00%  0.00%   0 Net Background
>   17           0         6      0   0.00%  0.00%  0.00%   0 Logger
>   18     1065056   2675856    398   0.00%  0.00%  0.00%   0 TTY Background
>   19     6532620   2675467   2441   0.00%  0.00%  0.00%   0 Per-Second Jobs
>   20     6679672     44771 149199   0.00%  0.00%  0.00%   0 Per-minute Jobs
>   21           0         1      0   0.00%  0.00%  0.00%   0 CSP Timer
>   22           0         1      0   0.00%  0.00%  0.00%   0 SONET alarm time
>   23           0         1      0   0.00%  0.00%  0.00%   0 Hawkeye Backgrou
>   24           0         1      0   0.00%  0.00%  0.00%   0 VIP Encap IPC Ba
>   25           0         1      0   0.00%  0.00%  0.00%   0 MLP Input
>   26          12         1  12000   0.00%  0.00%  0.00%   0 IP Flow LC Backg
>   27    44964204 266488100    168   0.00%  0.00%  0.00%   0 VIP MEMD buffer
>   28           0         1      0   0.00%  0.00%  0.00%   0 AAA Dictionary R
>   29           0         2      0   0.00%  0.00%  0.00%   0 IP Hdr Comp Proc
>   30     9387952  26219499    358   0.00%  0.00%  0.00%   0 MDFS MFIB Proces
>   31     1018112      1677 607103   0.00%  0.00%  0.00%   0 TurboACL
>   32    47172612  26504344   1779   0.00%  0.01%  0.00%   0 CEF LC IPC Backg
>   33    10743144   3454406   3109   0.00%  0.00%  0.00%   0 CEF LC Stats
>   34           0         4      0   0.00%  0.00%  0.00%   0 CEF MQC IPC Back
>   35           0         1      0   0.00%  0.00%  0.00%   0 TFIB LC cleanup
>   36           0         1      0   0.00%  0.00%  0.00%   0 Any Transport ov
>   37           0         1      0   0.00%  0.00%  0.00%   0 MDFS LC Process
>   38           0         1      0   0.00%  0.00%  0.00%   0 LI LC Messaging
>   39      143852     24419   5890   0.00%  0.00%  0.00%   0 Clock Client
>   40       84956    537101    158   0.00%  0.00%  0.00%   0 DBUS Console
>   41           0         1      0   0.00%  0.00%  0.00%   0 Net Input
>   42      249052    537499    463   0.00%  0.00%  0.00%   0 Compute load avg
>   43           0         1      0   0.00%  0.00%  0.00%   0 IP Flow Backgrou
>   44         120        27   4444   0.00%  0.00%  0.00%   1 console_rpc_serv
>