[c-nsp] Need help w/ output drops on 7613 WS-X6748-GE-TX
Peter Rathlev
peter at rathlev.dk
Wed Jan 5 06:20:06 EST 2011
On Wed, 2011-01-05 at 12:47 +0200, Hank Nussbacher wrote:
> At 10:56 05/01/2011 +0100, Peter Rathlev wrote:
> >Do you have QoS enabled? What does "show queueing interface Gi9/29" tell
> >you?
...
> gp#show queueing interface Gi9/29
> Interface GigabitEthernet9/29 queueing strategy: Weighted Round-Robin
...
> WRR bandwidth ratios: 100[queue 1] 150[queue 2] 200[queue 3]
> queue-limit ratios: 50[queue 1] 20[queue 2] 15[queue 3] 15[Pri Queue]
...
> Packets dropped on Transmit:
>
> queue dropped [cos-map]
> ---------------------------------------------
> 1 1590686 [0 1 ]
> 2 250 [2 3 4 ]
> 3 0 [6 7 ]
> 4 0 [5 ]
...
> How would you recommend adjusting the interface mls queues?
Queue 1 has 50% of the buffers and the most drops. You could increase
the queue 1 buffer size, but that would of course be at the expense of
the other queues.
We've chosen to combine queues 1 and 2, since we don't really use a lot
of classes. We use the following interface commands:
interface GigabitEthernet4/1
wrr-queue cos-map 1 2 0 1 2 3 4
wrr-queue queue-limit 70 0 15
!
This gives 70% of the buffer space to queue 1 and no space at all to
queue 2. The cos-map command puts CoS 0-4 in queue 1, so queue 2 isn't
used.
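To confirm the change took effect you can re-run the show command
quoted above; the queue-limit ratios should reflect the new split,
roughly like this (illustrative, output trimmed):

switch#show queueing interface Gi4/1 | include ratios
    queue-limit ratios:     70[queue 1] 0[queue 2] 15[queue 3] 15[Pri Queue]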
Caveat #1: The "wrr-queue cos-map" command propagates to all other
ports on the same ASIC, typically blocks of 12 ports, so you can't
have different CoS maps on ports sharing an ASIC.
Caveat #2: "wrr-queue queue-limit 70 0 15" reserves no space for queue
2, so any traffic that for whatever reason ends up in that queue is
dropped.
Instead of starving queue 2 completely you could just adjust the
partitioning. The default, as you can see above, is:
50% queue 1 (CoS 0 + 1, typically Best Effort and Scavenger)
20% queue 2 (CoS 2 + 3 + 4, typically various Assured Forwarding)
15% queue 3 (CoS 6 + 7, "network" traffic (IGP etc))
15% queue 4 (priority/EF, CoS 5, voice and jitter sensitive traffic)
So 60/10/15/15 might also work. Or if you don't use EF much (or don't
need buffers for it) then 65/10/15/10.
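For the 60/10/15/15 split that would be something like the sketch
below, using the same queue-limit command as above. Since that command
only sets the three WRR queues, the priority queue keeps its 15%. (For
the 65/10/15/10 variant you would also have to shrink the priority
queue's share; as far as I know that needs the separate
"priority-queue queue-limit" command and depends on software version.)

interface GigabitEthernet9/29
 ! take 10 points from queue 2 and give them to queue 1 (CoS 0-1)
 wrr-queue queue-limit 60 10 15
!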
Adjusting the WRED thresholds might also give good results, letting
TCP back off gracefully.
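I haven't tuned WRED on the 6748 myself, so treat this as an untested
sketch: WRED is handled per transmit queue with the random-detect
variants of the wrr-queue command, thresholds being percentages of the
queue depth (eight thresholds per WRR queue on this card type, if I
remember right, and the defaults differ between software versions):

interface GigabitEthernet9/29
 ! enable WRED on queue 1 and start dropping somewhat earlier so TCP
 ! flows back off before the queue fills completely
 wrr-queue random-detect 1
 wrr-queue random-detect min-threshold 1 40 50 60 70 80 90 100 100
 wrr-queue random-detect max-threshold 1 70 80 90 100 100 100 100 100
!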
I don't know of any way to list interface buffer utilization, so trial
and error seems to be the only way.
--
Peter