[c-nsp] 6500 (Sup7203-bxl / 6724-SFP) Input queue drops

Justin Shore justin at justinshore.com
Mon Jan 11 11:44:40 EST 2010


joshua sahala wrote:
> Drew,
> 
> It may or may not be related, but check the output of 'sh counters
> int <int> [delta]' and look at the qos[1-21][In|Out]lost counters.
> 
> I was experiencing various drops due to the default interface (QoS)
> buffer allocation: basically, all of my traffic was hitting the 76xx
> swouter in the q0 buffer and overrunning it (there were no drops in
> any of the other QoS queues because no traffic was ever hitting
> them).  I ended up having to rewrite the buffer mapping to allocate
> everything to q0, and the random discards stopped (at least the ones
> caused by this issue).
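
(Recapping joshua's fix for the archives: on a card with a 2q8t
receive queue, the rewrite would look something like the sketch below.
The rx-queue structure and sensible limits vary by line card, so treat
the numbers as placeholders -- check 'sh queueing interface <int>'
first, and watch the lost counters with 'sh counters int <int> delta'
before and after.

interface GigabitEthernet1/1
 ! placeholder values: give queue 1 most of the buffer...
 rcv-queue queue-limit 90 10
 ! ...and steer CoS 0-4 into queue 1, threshold 1
 rcv-queue cos-map 1 1 0 1 2 3 4

That is only a sketch of the idea, not a tested config.)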

I want to revive an old thread if I can.  I'm facing a similar issue 
now.  Gi1/1 on each of the 6724s in my core 7600s (3BXL) connects to 
one of my border routers, a 7206 G1.  Both of those interfaces show 
large volumes of input drops and flushes.  Gi1/2 on the same 6724s 
connects to a 3845, my other border router, and shows significantly 
lower drops and flushes (4 digits instead of 7 or 8).  All 4 links 
are SX.  'sh counters' didn't yield anything terribly interesting 
either.

7613-1.clr#sh counters interface gi1/1 delta | e = 0
Time since last clear
---------------------
never

64 bit counters:
  0.                      rxHCTotalPkts = 123760873738
  1.                      txHCTotalPkts = 45947101814
  2.                    rxHCUnicastPkts = 123747989684
  3.                    txHCUnicastPkts = 45941233718
  4.                  rxHCMulticastPkts = 12883997
  5.                  txHCMulticastPkts = 5868073
  6.                  rxHCBroadcastPkts = 57
  7.                  txHCBroadcastPkts = 23
  8.                         rxHCOctets = 101377579108374
  9.                         txHCOctets = 16976124978053
10.                 rxTxHCPkts64Octets = 8893600878
11.            rxTxHCPkts65to127Octets = 57698604883
12.           rxTxHCPkts128to255Octets = 20633513794
13.           rxTxHCPkts256to511Octets = 7123204457
14.          rxTxHCpkts512to1023Octets = 6652027912
15.         rxTxHCpkts1024to1518Octets = 26440990980

32 bit counters:
  2.                    rxOversizedPkts = 2492150694
13.                         linkChange = 2
All Port Counters
  1.                          InPackets = 123760839646
  2.                           InOctets = 101377556782449
  3.                        InUcastPkts = 123747955595
  4.                        InMcastPkts = 12883994
  5.                        InBcastPkts = 57
  6.                         OutPackets = 45947087810
  7.                          OutOctets = 16976121260975
  8.                       OutUcastPkts = 45941219715
  9.                       OutMcastPkts = 5868072
10.                       OutBcastPkts = 23
22.                             Giants = 2492143293
35.                 rxTxHCPkts64Octets = 8893600875
36.            rxTxHCPkts65to127Octets = 57698582793
37.           rxTxHCPkts128to255Octets = 20633505929
38.           rxTxHCPkts256to511Octets = 7123201908
39.          rxTxHCpkts512to1023Octets = 6652026348
40.         rxTxHCpkts1024to1518Octets = 26440984821
44.                      OversizedPkts = 2492143293
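
The per-queue hardware counters and the software input queue are the 
other places I know to look:

7613-1.clr#sh queueing interface gi1/1
7613-1.clr#sh interfaces gi1/1 | include Input queue

The first shows drops per receive queue on the port ASIC; the second 
shows the size/max/drops/flushes of the interface's software hold 
queue.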


The giants are explained by the MTU I have on those links; I run 9000 
on all infrastructure links.  Other than that I don't see anything 
else wrong.  All the QoS Lost lines were 0.  All infrastructure 
interfaces are also MPLS enabled.  The 7206 carries the bulk of the 
Internet traffic, as does 7600 #1, so it's not a big surprise to see 
its links affected much more than the 3845's links.
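
(For context, the config on those links is essentially just jumbo MTU 
plus MPLS -- roughly this, give or take the addressing:

interface GigabitEthernet1/1
 mtu 9000
 mpls ip

As far as I can tell the oversized/giants counters on these cards 
count anything over the standard 1518 bytes regardless of the 
configured MTU, so with jumbos enabled they're cosmetic.)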

I'm graphing interface errors/discards with Cacti, though I have to 
question the numbers it's giving me; they have never seemed accurate 
on any of my interfaces.
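
One way I can sanity-check Cacti is to poll the standard IF-MIB 
discard counter directly and compare deltas against the CLI, e.g.:

$ snmpget -v2c -c <community> 7613-1.clr IF-MIB::ifInDiscards.<ifIndex>

(ifInDiscards is a 32-bit counter with no 64-bit HC version, so a 
poller can also be thrown off if it wraps between polls.)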

Are my queues not deep enough to carry the traffic flow?  Peak 
throughput through the 7206 is about 120 Mbps, and if Cacti is right 
then we're also only talking about 17,000 pps on the upstream-facing 
interface of the 7206, most of which would come from 7600 #1.  
Thoughts?
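
(If the drops turn out to be on the software path -- the input queue 
drops/flushes in 'sh interfaces' only count traffic punted to the RP, 
and the flushes are SPD flushes -- then one knob, offered as an 
untested sketch, is deepening the input hold queue:

interface GigabitEthernet1/1
 hold-queue 2048 in

That only helps punted/process-switched packets, though; it does 
nothing for drops in the port ASIC's receive queues.)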

Thanks
  Justin

