[c-nsp] issues with 6500 platform

Jason Cardenas jason.cardenas79 at gmail.com
Tue Feb 8 12:09:18 EST 2011


On Tue, Feb 8, 2011 at 10:32 AM, Peter Rathlev <peter at rathlev.dk> wrote:

Peter,


> > #show platform hardware capacity
> ...
> > Interface Resources
> >   Interface drops:
> >     Module    Total drops:  Tx          Rx   Highest drop port:  Tx  Rx
> >     3                       3020928236  0                        12  0
>
> That might be a problem. What does "show interface" counters tell you
> about "output drops", at least for Gi3/12 but probably also other
> interfaces? Are the 3,020,928,236 drops a significant amount compared to
> "packets output"?
>
>
G3/12 is just another interface toward customers, not really involved in the
"slowness". We have a fresh port for testing, no drops there.

The drops on G3/12 and the other interfaces are attributable to things
like DoS attacks, when someone sends us more than a gig's worth of
traffic. Here's what we'd normally see during the day:

GigabitEthernet3/12 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 0005.7496.3bdb (bia 0005.7496.3bdb)
  Description: to edgeXX.YY.ZZ
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 96/255, rxload 96/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is SX
  input flow-control is off, output flow-control is on
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:47, output 00:00:14, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1603304227
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 377820000 bits/sec, 565963 packets/sec
  30 second output rate 379561000 bits/sec, 580015 packets/sec
     5091739596822 packets input, 464570607530717 bytes, 0 no buffer
     Received 29287746 broadcasts (28661975 multicasts)
     217 runts, 0 giants, 0 throttles
     217 input errors, 217 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     5209469023387 packets output, 437251733337833 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

The "Total output drops" counter stays the same at 1603304227, and the
same goes for all the other interfaces.
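
For scale, 1603304227 drops against 5209469023387 packets output works
out to roughly 0.03%. For anyone following along, this is how we've
been checking that the counter isn't moving (run it twice a few
minutes apart and compare the figures):

  show interfaces GigabitEthernet3/12 | include output drops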

Our typical interface looks like this:

interface GigabitEthernet3/12
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 986
 switchport trunk allowed vlan 185,203
 switchport mode trunk
 logging event link-status
 load-interval 30
 speed nonegotiate


Try "show queueing interface Gi3/X" for every X with a significant
> amount of "output drops".
>
> It does not in itself explain why the drops are selective of course.
> Unless you have some complex-ish QoS-marking somewhere.
>

Gi3/16 -- one of the upstreams:

 Interface GigabitEthernet3/16 queueing strategy:  Weighted Round-Robin
  Port QoS is enabled <--
  Port is untrusted
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
    Queueing Mode In Tx direction: mode-cos
    Transmit queues [type = 1p2q2t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       1         WRR low             2
       2         WRR high            2
       3         Priority            1

    WRR bandwidth ratios:  100[queue 1] 255[queue 2]
    queue-limit ratios:     70[queue 1]  15[queue 2]  15[Pri Queue] *same as Q2

    queue random-detect-min-thresholds
    ----------------------------------
      1    40[1] 70[2]
      2    40[1] 70[2]

    queue random-detect-max-thresholds
    ----------------------------------
      1    70[1] 100[2]
      2    70[1] 100[2]

    queue thresh cos-map
    ---------------------------------------
    1     1      0 1
    1     2      2 3
    2     1      4 6
    2     2      7
    3     1      5

    Queueing Mode In Rx direction: mode-cos
    Receive queues [type = 1p1q4t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       1         Standard            4
       2         Priority            1


    queue tail-drop-thresholds
    --------------------------
    1     100[1] 100[2] 100[3] 100[4]

    queue thresh cos-map
    ---------------------------------------
    1     1      0 1 2 3 4 5 6 7
    1     2
    1     3
    1     4
    2     1


  Packets dropped on Transmit:
    BPDU packets:  0

    queue thresh             dropped  [cos-map]
    ---------------------------------------------------
    1     1              1367164769  [0 1 ]
    1     2                       0  [2 3 ]
    2     1                       0  [4 6 ]
    2     2                       0* [7 ]
    3     1                       0* [5 ]
                                  * - shared transmit counter

  Packets dropped on Receive:
    BPDU packets:  0

    queue thresh              dropped  [cos-map]
    ---------------------------------------------------
    1     1                       0  [0 1 2 3 4 5 6 7 ]
                                  * - shared receive counter
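
All of the Tx drops above sit in queue 1 / threshold 1, i.e. CoS 0 and
1; since the port is untrusted, that is where every frame lands. If I
read the output right, WRED starts randomly dropping that traffic once
the queue is 40% full. One thing we could try, I assume, is raising
those thresholds (syntax from memory, and the numbers are purely
illustrative, not something we've tested):

interface GigabitEthernet3/16
 wrr-queue random-detect min-threshold 1 70 85
 wrr-queue random-detect max-threshold 1 85 100

Though that would only paper over the question of why the queue fills
up in the first place.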

> > Pete Lumbus, we're looking in that direction; but we simply don't
> > have anything like "QoS, tcp adjust-mss" on our interfaces
>
> But you have QoS enabled globally. With no interface specific
> configuration you might have worse results than with no QoS at all.
>

If we disable QoS altogether, is it likely to kill the box even for a
moment, like a second or two, or three? We would certainly do this
overnight with staff present on site, but I would really like to know
what we may be running into here.
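
For clarity, the change we're contemplating is just the global toggle,
roughly the following (quoting syntax from memory):

conf t
 no mls qos
end
show mls qos

with the final "show mls qos" to confirm the global QoS state
afterwards.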

What interface QoS options could we try for our upstream interfaces
and the interfaces toward the server farms that are running into this
'slowness' issue?
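
From the documentation, the per-interface knobs I'm aware of on this
hardware are roughly these (again from memory, with illustrative
values; the "<-" notes are mine):

interface GigabitEthernet3/16
 mls qos trust dscp               <- or "mls qos trust cos" on trunks
 wrr-queue bandwidth 100 255      <- WRR weights for queues 1 and 2
 wrr-queue queue-limit 85 10      <- Tx buffer split between the queues
 wrr-queue cos-map 1 1 0 1        <- CoS 0-1 -> queue 1, threshold 1
 priority-queue cos-map 1 5       <- CoS 5 -> strict priority queue

If any of these are known to interact badly on this platform, that is
exactly what I'd like to hear about.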


Thank you! Much appreciated

Jason

