[c-nsp] Cisco 7206VXR with NPE-G1
Rich Davies
rich.davies at gmail.com
Wed Jul 8 20:07:56 EDT 2015
All,
This is what I am seeing on 7206s with NPE-G1s running IOS 12.4(12b).
I might add that I have quite a few of these in service, all on the same
rev of IOS and all carrying variable traffic over a 24-hour span
(50-300 Mbps):
6#show int gig 0/1
GigabitEthernet0/1 is up, line protocol is up
Hardware is BCM1250 Internal MAC, address is 0002.fcb7.f01b (bia
0002.fcb7.f01b)
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 158/255, rxload 163/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 17w6d
Input queue: 0/75/1386/38826 (size/max/drops/flushes); Total output
drops: 3120926
Queueing strategy: fifo
Output queue: 0/1000 (size/max)
30 second input rate 64012000 bits/sec, 34831 packets/sec
30 second output rate 62212000 bits/sec, 36408 packets/sec
186758885336 packets input, 46288188866751 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
8288 input errors, 0 CRC, 0 frame, 8288 overrun, 0 ignored
0 watchdog, 5778334 multicast, 0 pause input
0 input packets with dribble condition detected
206213751240 packets output, 49186222484144 bytes, 0 underruns
3 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
3 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
I've already swapped out GBICs and fiber and had the Z-side of these
interfaces troubleshot; nothing indicates a hardware issue, yet the
input errors and overruns continue.
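In case it helps anyone compare notes: my working assumption is that the
overruns mean the receive side is running out of particles/ring space
while the CPU is busy, so I've been trying to confirm how much of this
traffic is actually being punted rather than CEF-switched. If I remember
the 12.4 commands right (treat this as a sketch, not gospel), the split
can be seen with:

show interfaces gig 0/1 stats
show interfaces gig 0/1 switching
show ip cef switching statistics

The first shows per-path packet/byte counters (Processor vs Route cache),
the second breaks it down per protocol, and the third shows the punt/drop
reasons CEF recorded. If the Processor counters climb noticeably during
the busy periods, that would point at the punt path rather than the wire.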
Also, when I do a "show buffers", this is what I am seeing:
Buffer elements:
1851 in free list (1119 max allowed)
3838947704 hits, 0 misses, 619 created
Public buffer pools:
Small buffers, 104 bytes (total 69, permanent 50, peak 190 @ 7w0d):
58 in free list (20 min, 150 max allowed)
236662807 hits, 150799 misses, 119546 trims, 119565 created
31355 failures (0 no memory)
Middle buffers, 600 bytes (total 34, permanent 25, peak 265 @ 7w0d):
32 in free list (10 min, 150 max allowed)
99477866 hits, 130600 misses, 75340 trims, 75349 created
34516 failures (0 no memory)
Big buffers, 1536 bytes (total 50, permanent 50, peak 56 @ 7w0d):
50 in free list (5 min, 150 max allowed)
56218156 hits, 60 misses, 60 trims, 60 created
7 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 10, permanent 10):
10 in free list (0 min, 100 max allowed)
7 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Large buffers, 5024 bytes (total 0, permanent 0):
0 in free list (0 min, 10 max allowed)
0 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Huge buffers, 18024 bytes (total 1, permanent 0, peak 11 @ 7w0d):
1 in free list (0 min, 4 max allowed)
97561 hits, 334 misses, 2460 trims, 2461 created
0 failures (0 no memory)
Interface buffer pools:
Syslog ED Pool buffers, 600 bytes (total 150, permanent 150):
118 in free list (150 min, 150 max allowed)
679286 hits, 0 misses
IPC buffers, 4096 bytes (total 2, permanent 2):
2 in free list (1 min, 8 max allowed)
0 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
Header pools:
Header buffers, 0 bytes (total 1256, permanent 256, peak 1256 @ 7w0d):
1000 in free list (256 min, 1024 max allowed)
20360150 hits, 675 misses, 0 trims, 1000 created
0 failures (0 no memory)
256 max cache size, 256 in cache
3913285762 hits in cache, 20359979 misses in cache
Particle Clones:
1024 clones, 780885 hits, 0 misses
Public particle pools:
F/S buffers, 128 bytes (total 512, permanent 512):
0 in free list (0 min, 512 max allowed)
512 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
512 max cache size, 512 in cache
780885 hits in cache, 0 misses in cache
Normal buffers, 512 bytes (total 2048, permanent 2048):
2048 in free list (1024 min, 4096 max allowed)
215320690 hits, 96662 misses, 98827 trims, 98827 created
0 failures (0 no memory)
Private particle pools:
GigabitEthernet0/1 buffers, 512 bytes (total 1000, permanent 1000):
0 in free list (0 min, 1000 max allowed)
1000 hits, 0 fallbacks
1000 max cache size, 872 in cache
838175508 hits in cache, 0 misses in cache
14 buffer threshold, 0 threshold transitions
GigabitEthernet0/2 buffers, 512 bytes (total 1000, permanent 1000):
0 in free list (0 min, 1000 max allowed)
1000 hits, 64151594 fallbacks
1000 max cache size, 870 in cache
662670440 hits in cache, 64151594 misses in cache
14 buffer threshold, 15169777 threshold transitions
GigabitEthernet0/3 buffers, 512 bytes (total 1000, permanent 1000):
0 in free list (0 min, 1000 max allowed)
1000 hits, 151169096 fallbacks
1000 max cache size, 810 in cache
3825225179 hits in cache, 151169096 misses in cache
14 buffer threshold, 7649015 threshold transitions
It looks like the public particle pool "Normal" buffers have some
misses/trims. On the private particle pools I'm seeing a lot of
fallbacks/misses, but only on two of the gig interfaces (and those are
onboard the NPE-G1, not on port adapters).
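If I do end up tuning the public pools the way that's been suggested in
this thread, my understanding is it would be global config along these
lines (the numbers here are purely illustrative, not recommendations):

buffers small permanent 150
buffers small min-free 50
buffers middle permanent 150
buffers middle min-free 50

That should keep the Small/Middle pools from having to create and trim
buffers on demand, which is where the misses and failures above appear
to come from. As far as I know it does nothing for the private particle
pool fallbacks on the onboard gig ports, though, which seems to match
Lukas's point below about particles versus buffers.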
CPU on this NPE-G1 averages 30-50% over a 24-hour period.
Currently I am seeing:
#show proc cpu history
ALNA-RTR01-C7206 08:06:05 PM Wednesday Jul 8 2015 EST
444444444444444444444444444444444444444444444444444444444444
444444444445555544444555554444455555444444444444444444444444
100
90
80
70
60
50 ***** ***** *****
40 ************************************************************
30 ************************************************************
20 ************************************************************
10 ************************************************************
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per second (last 60 seconds)
444446444444444444444444444444444444334443343344444444343343
655577445788796742423424211452100300981117829920310012907918
100
90
80
70 *
60 *
50 *****# **#####* *
40 ############################################################
30 ############################################################
20 ############################################################
10 ############################################################
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per minute (last 60 minutes)
* = maximum CPU% # = average CPU%
655656566653333223224466556666668753322223224456666665665653322121233455
386138806758040754791293884097978146365043792761203737277995254769025256
100
90 *
80 *
70 * ** * ****** * * *
60 **** ****** ************ ************* **
50 *********#* ************* ************** **
40 #**########* ***##**########* ***##*#########* ****
30 ###########***********##############*** *****#############*** * ***##
20 ############*********###############*********###############*********###
10 ##############****####################*****###################*****#####
0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
0 5 0 5 0 5 0 5 0 5 0 5 0
CPU% per hour (last 72 hours)
* = maximum CPU% # = average CPU%
We have enough bandwidth available on all of these interfaces, so is
this an issue with the IOS version I am running, or is this just normal
for the NPE-G1? I don't even have a single port adapter installed in any
of the six slots on this router.
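One more data point I still need to gather: whether that 30-50% is
mostly interrupt-level load (i.e. actual forwarding) or a particular
process chewing cycles. If I have the command right, this should show it:

show processes cpu sorted

The "CPU utilization for five seconds: X%/Y%" line at the top gives total
versus interrupt-level; if the second number is close to the first, it's
essentially all forwarding load, which would at least be consistent with
an NPE-G1 pushing a few hundred Mbps of mixed traffic.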
Thanks so much for all the input this has generated so far. It's nice to
hear everyone's opinion on this EOL yet still quite useful router.
Rich
On Wed, Jul 8, 2015 at 8:44 AM, Matthew Huff <mhuff at ox.com> wrote:
> Even with “ip options drop”, we are still seeing a fraction of packets
> being cpu switched (about 0.2% input and 1% output) even though we are
> using CEF. Looks like they are mostly door-knob packets destined for the
> router itself (which is ACL’d) or some other annoyance. We tuned the
> buffers to solve the cosmetic counter issue with input/output errors. Since
> we increased the buffers, the counters have been clean, and the “show
> buffers” did show a shortage of buffers, so even on the 7200 with
> particles, perhaps the CLI uses the buffers construct regardless. Maybe it
> was just a placebo effect.
>
> rtr-inet2#sh int gi0/1 stats
> GigabitEthernet0/1
>   Switching path      Pkts In     Chars In     Pkts Out    Chars Out
>        Processor      2824573    677403057      2958521    196941357
>      Route cache    944226953    181329390    274701286   1591461341
>            Total    947051526    858732447    277659807   1788402698
>
> rtr-inet2#sh ip cef switching statistics
>
> Path    Reason                         Drop       Punt  Punt2Host
> RP LES  Packet destined for us            0    4377823          0
> RP LES  Total                             0    4377823          0
>
> RP PAS  No route                       3923          0          0
> RP PAS  Packet destined for us            0    4933417          8
> RP PAS  No adjacency                   3476          0          0
> RP PAS  Incomplete adjacency        1204250          0          0
> RP PAS  TTL expired                       0          0    2709278
> RP PAS  Routed to Null0           392389023          0          0
> RP PAS  Features                  151528682          0      22614
> RP PAS  IP redirects                      0          0      12358
> RP PAS  Neighbor resolution req      399660          0          0
> RP PAS  Total                     545529014    4933417    2744258
>
> All     Total                     545529014    9311240    2744258
>
> On Jul 8, 2015, at 4:13 AM, Lukas Tribus <luky-37 at hotmail.com> wrote:
>
> "on a 10000". The 7200 uses particles, not buffers...
>
> Also, it's not relevant for CEF-switched traffic. So unless your
> configuration
> requires fast-switching or process-switching, you don't need to worry about
> buffer/particle tuning at all.
>
>
> Lukas
>
>
>
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>