I have a GEIP+ card that keeps getting overruns. When looking inside the
VIP (sho cont g0/0/0) I see many missed packets:
FX1000 Statistics (PA0)
CRC error             0          Symbol error          0
Missed Packets        34758106   Single Collision      0
Excessive Coll        0          Multiple Coll         0
Late Coll             0          Collision             0
Defer                 0          Receive Length        0
Sequence Error        0          XON RX                0
XON TX                1          XOFF RX               0
XOFF TX               4          FC RX Unsupport       0
Packet RX (64)        32526089   Packet RX (127)       0
Packet RX (255)       0          Packet RX (511)       0
Packet RX (1023)      0          Packet RX (1522)      0
Good Packet RX        32526092   Broadcast RX          1
Multicast RX          0          Good Packet TX        0
Good Octets RX.H      0          Good Octets RX.L      2081669888
Good Octets TX.H      0          Good Octets TX.L      679494
RX No Buff            0          RX Undersize          0
RX Fragment           0          RX Oversize           0
RX Octets High        0          RX Octets Low         11222016
TX Octets High        0          TX Octets Low         679814
TX Packet             6364       RX Packet             67284211
TX Broadcast          4          TX Multicast          907
Packet TX (64)        5448       Packet TX (127)       0
Packet TX (255)       0          Packet TX (511)       907
Packet TX (1023)      4          Packet TX (1522)      0
What does "Missed Packets" mean? On "show int" I see overruns at only 54 Kpps
on a GEIP+ (VIP4-80)!
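For a sense of scale, here is a quick back-of-the-envelope check of the
counters above (a sketch in Python; the numbers are copied straight from the
"show controllers" output):

```python
# Counters copied from the FX1000 statistics above.
missed = 34_758_106    # Missed Packets: frames the chip had no buffer for
good_rx = 32_526_092   # Good Packet RX: frames actually delivered upstream
rx_total = 67_284_211  # RX Packet: everything the MAC counted on the wire

# "Missed" plus "good" should roughly equal the total the MAC counted.
print(missed + good_rx)  # 67284198, within a few frames of RX Packet

# Fraction of received frames the PA failed to buffer.
loss = missed / (missed + good_rx)
print(f"{loss:.1%}")     # about 51.7% of frames missed
```

So if these counters are consistent, more than half of what arrives on the
wire is being dropped at the port adapter.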
VIP-Slot0#sho int
GigabitEthernet0/0 is up, line protocol is up
Hardware is WISEMAN, address is 00d0.7939.a800 (bia 00d0.7939.a800)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec, rely 255/255, load 1/255
Encapsulation ARPA, loopback not set
Full-duplex mode, link type is autonegotiation, media type is SX
output flow-control is on, input flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output never, output hang never
Last clearing of "show interface" counters never
Queueing strategy: fifo
Output queue 0/40, 0 drops; input queue 0/75, 0 drops
5 minute input rate 19999000 bits/sec, 54343 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1062330 packets input, 48867180 bytes, 0 no buffer
Received 2 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 68038990 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 packets output, 0 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 babbles, 0 late collision, 0 deferred
3 lost carrier, 0 no carrier, 4 pause output
0 output buffer failures, 0 output buffers swapped out
The config is very simple:
ip flow-cache feature-accelerate
ip cef distributed
!
interface GigabitEthernet0/0/0
ip address 192.168.100.1 255.255.255.0
no ip directed-broadcast
ip route-cache flow
ip route-cache distributed
load-interval 30
negotiation auto
I tried removing "ip route-cache flow" and "ip route-cache distributed"; I
tried removing "ip flow-cache feature-accelerate" as well.
I am running with:
IOS (tm) RSP Software (RSP-PV-M), Version 12.0(14)S2, EARLY DEPLOYMENT
RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2001 by cisco Systems, Inc.
Compiled Fri 12-Jan-01 12:27 by pwade
Image text-base: 0x60010950, data-base: 0x60DE2000
ROM: System Bootstrap, Version 11.1(8)CA1, EARLY DEPLOYMENT RELEASE
SOFTWARE (fc1)
BOOTFLASH: RSP Software (RSP-BOOT-M), Version 12.0(14)S2, EARLY DEPLOYMENT
RELEASE SOFTWARE (fc1)
c751e1 uptime is 15 hours, 33 minutes
System returned to ROM by reload at 08:32:55 UTC Wed Dec 6 2000
System image file is "slot0:rsp-pv-mz_120-14_S2.bin"
cisco RSP4 (R5000) processor with 262144K/2072K bytes of memory.
R5000 CPU at 200Mhz, Implementation 35, Rev 2.1, 512KB L2 Cache
----------------------------------
Last question: when sending exactly 100 Kpps there were no overruns, but once
we configured "ip route-cache distributed" on the GE interface we started to
lose 0.3% of the packets. I always thought "ip route-cache distributed" helps
and doesn't hurt. Our testing shows the opposite. Why?
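In absolute terms (my arithmetic, assuming the offered load really is a steady
100 Kpps):

```python
offered_pps = 100_000  # test load: 100 Kpps
loss_rate = 0.003      # the 0.3% loss measured with dCEF enabled

lost_pps = offered_pps * loss_rate
print(lost_pps)        # 300.0 packets dropped per second
```

That is roughly 300 packets/sec lost just from turning the feature on, which
is why it looks so surprising to us.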
-Hank
This archive was generated by hypermail 2b29 : Sun Aug 04 2002 - 04:12:48 EDT