[c-nsp] 2960S drops/packet loss

Andriy Bilous andriy.bilous at gmail.com
Mon Dec 26 07:45:33 EST 2011


I would recommend isolating this port into queue-set 2, which isn't
used by default, and applying the buffer tuning Anton suggested to
that queue-set only, so you won't disturb queueing on the other ports,
which aren't dropping anyway.

int g1/0/17
queue-set 2
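
To confirm the change took effect, the buffers output indicates which
queue-set the port is mapped to (interface name taken from the example
above; exact output wording varies by release):

 show mls qos interface g1/0/17 buffers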

IIRC the newer stackable 2960-S doesn't allow configuration of the
input queues, so you can skip that part. Also, I would start with more
conservative values, reassigning buffers and scheduler time gradually
while observing drops (do you have functional network management in
place?) - it might take many iterations before you find the optimal
values.
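
While iterating, the per-queue drop counters are the thing to watch;
with 'mls qos' enabled, something along these lines should work (exact
counter names and layout vary by platform and release):

 show mls qos interface g1/0/17 statistics
 show interfaces g1/0/17 counters errors

The first shows enqueue/drop counts per queue and threshold; the
second shows the aggregate output discards you're already seeing.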

You can find the best-known document about QoS on the small Catalysts
here: https://supportforums.cisco.com/docs/DOC-8093

On Sun, Dec 25, 2011 at 9:24 PM, Anton Kapela <tkapela at gmail.com> wrote:
> On Wed, Dec 21, 2011 at 9:41 PM, John Elliot <johnelliot67 at hotmail.com> wrote:
>>
>> Hi Guys,
>>
>> Have a pair of 2960's in a stack; one port (trunk) connects to another DC, and we are seeing ~5% packet loss and large output drops to this DC.
>
> Wait up -- 2960, slight over-sub on a trunk/uplink int, averages not
> close to line-rate, and yet tx discards? You don't say!
>
> Depending on the situation, default queue configs (both with and
> without 'mls qos' globally enabled) are fairly naive -- I suggest:
>
> -consider any need for actual QoS
>
> -re-apportion queue allocations accordingly (as the defaults are often
> not ideal for 'real' networks with 'real' RTT's)
>
> Here's what I've done on nearly every 2960G, S, 3560G, X, and
> 3750-whatever I've had to deal with given slight/moderate transmission
> oversub of any given int. The assumption in this config is that the
> network does not require fancy QoS, just high and less-high prio
> queues. That is, we can usually get away with a config that only
> includes enough queue isolation that bgp, ospf, voip/rtp all work
> stably, even when iperf and iscsi are slamming the shared links.
>
> Consider the following global adjustments -- which, in a nutshell,
> take any input DSCP values of 39 and below and place them in queue 1,
> and 40 and up in queue 2. It then sets the discard thresholds for
> queues 1 and 2 to 3200% and 3100% of max (normally only ~100 packets
> are buffered), which is what provides a bit more burst ride-through.
> Previous versions of 2960/3560/3750 code didn't permit more than 100%
> of the normal max buffering, but recent code (12.2(50)SE and later)
> seems to allow any one port greater access to the anemic shared
> buffer pool.
>
> mls qos map cos-dscp 0 8 16 24 32 46 48 56
> mls qos srr-queue input bandwidth 9 1
> mls qos srr-queue input threshold 1 90 100
> mls qos srr-queue input threshold 2 90 95
> mls qos srr-queue input buffers 95 5
> mls qos srr-queue input priority-queue 2 bandwidth 5
> mls qos srr-queue input cos-map queue 1 threshold 3  0 2 3 4
> mls qos srr-queue input cos-map queue 2 threshold 2  5
> mls qos srr-queue input cos-map queue 2 threshold 3  6 7
> mls qos srr-queue input dscp-map queue 1 threshold 3  0 1 2 3 4 5 6 7
> mls qos srr-queue input dscp-map queue 1 threshold 3  16 17 18 19 20 21 22 23
> mls qos srr-queue input dscp-map queue 1 threshold 3  24 25 26 27 28 29 30 31
> mls qos srr-queue input dscp-map queue 1 threshold 3  32 33 34 35 36 37 38 39
> mls qos srr-queue input dscp-map queue 2 threshold 2  40 41 42 43 44 45 46 47
> mls qos srr-queue input dscp-map queue 2 threshold 3  48 49 50 51 52 53 54 55
> mls qos srr-queue input dscp-map queue 2 threshold 3  56 57 58 59 60 61 62 63
> mls qos srr-queue output cos-map queue 1 threshold 3  5 6 7
> mls qos srr-queue output cos-map queue 2 threshold 3  0 2 3 4
> mls qos srr-queue output dscp-map queue 1 threshold 3  40 41 42 43 44 45 46 47
> mls qos srr-queue output dscp-map queue 1 threshold 3  48 49 50 51 52 53 54 55
> mls qos srr-queue output dscp-map queue 1 threshold 3  56 57 58 59 60 61 62 63
> mls qos srr-queue output dscp-map queue 2 threshold 3  0 1 2 3 4 5 6 7
> mls qos srr-queue output dscp-map queue 2 threshold 3  16 17 18 19 20 21 22 23
> mls qos srr-queue output dscp-map queue 2 threshold 3  24 25 26 27 28 29 30 31
> mls qos srr-queue output dscp-map queue 2 threshold 3  32 33 34 35 36 37 38 39
> mls qos queue-set output 1 threshold 1 3200 3200 100 3200
> mls qos queue-set output 1 threshold 2 3100 3100 100 3200
> mls qos queue-set output 1 buffers 5 95 0 0
> mls qos
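>
> To sanity-check the result, the current maps and queue-set values can
> be displayed (commands as on recent 2960/3750 code; output layout
> varies by release):
>
>  show mls qos maps dscp-output-q
>  show mls qos queue-set 1
>
> The first prints the DSCP-to-output-queue map as a grid; the second
> shows the buffer and threshold values now applied to queue-set 1.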
>
> Then the per-interface adjustment, which doesn't really matter until
> the link is nearing saturation:
>
>  srr-queue bandwidth share 5 95 1 1
>  srr-queue bandwidth shape  0  0  0  0
>  priority-queue out
>  mls qos trust dscp
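>
> Once applied, the per-interface queueing state can be verified with
> something like (output format varies by release):
>
>  show mls qos interface g1/0/17 queueing
>
> which should report the share/shape weights and confirm that the
> egress priority queue is enabled.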
>
> Best of luck,
>
> -Tk
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/


