[c-nsp] WRR Confusion on 6748 blades

Peter Rathlev peter at rathlev.dk
Tue Jun 26 16:50:27 EDT 2012


On Tue, 2012-06-26 at 14:16 -0600, John Neiberger wrote:
> I'm getting conflicting information about how WRR scheduling and
> queueing works on 6748 blades. These blades have three regular queues
> and one priority queue. We've been told by two Cisco TAC engineers
> that if one queue is full, packets will start being dropped even if
> you have plenty of link bandwidth available.

That is correct: if the queue is full, packets are dropped. The question
is then: why does the queue fill up in the first place if there's plenty
of bandwidth available?

> Our experience over the past few days dealing with related issues
> seems to bear this out. If a queue doesn't have enough bandwidth
> allotted to it, bad things happen even when the link has plenty of
> room left over.

Can you share the configuration from the interface in question together
with the output from "show interface GiX/Y" and "show queueing interface
GiX/Y"? And maybe "show flowcontrol interface GiX/Y" if you're using
flowcontrol.

> 
> However, someone else is telling me that traffic should be able to
> burst up to the link speed as long as the other queues are not full.

Correct. Keep in mind that queueing and bandwidth are two different
things working together. Packets are placed in queues, and the queues
are served in weighted round-robin fashion. If packets arrive in a queue
faster than that queue can transmit them, it fills up and starts to
drop. As long as there's bandwidth available, all the WRR queues should
be able to send what they have.
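As an illustration (a toy model, not Cisco's actual implementation), a
weighted-round-robin port with per-queue tail drop shows both effects:
an idle link lets any queue burst, yet a single full queue still drops.
All class names, weights, and buffer sizes below are made up:

```python
# Toy WRR port: three queues, per-queue buffers, tail drop.
# Illustration only -- weights and buffer sizes are invented.
from collections import deque

class WrrPort:
    def __init__(self, weights, buffer_pkts):
        self.queues = [deque() for _ in weights]
        self.weights = weights          # packets served per WRR turn
        self.buffer_pkts = buffer_pkts  # per-queue buffer, in packets
        self.drops = [0] * len(weights)

    def enqueue(self, qid, pkt):
        # Tail drop: a full queue drops regardless of link utilization.
        if len(self.queues[qid]) >= self.buffer_pkts[qid]:
            self.drops[qid] += 1
        else:
            self.queues[qid].append(pkt)

    def serve_round(self):
        # One WRR round: each queue may send up to its weight. Empty
        # queues simply skip their turn, so when the other queues are
        # idle a single queue effectively gets the whole link over
        # successive rounds -- it can "burst up to link speed".
        sent = []
        for qid, w in enumerate(self.weights):
            for _ in range(w):
                if self.queues[qid]:
                    sent.append(self.queues[qid].popleft())
        return sent

port = WrrPort(weights=[4, 2, 1], buffer_pkts=[8, 8, 8])
# Flood queue 2 faster than it is being served: drops appear even
# though the link itself has spare capacity.
for i in range(20):
    port.enqueue(2, f"pkt{i}")
print(port.drops[2])  # → 12 (20 offered, 8 fit in the buffer)
```

The point of the sketch: the drops come from the per-queue buffer
filling between service turns, not from any shortage of link bandwidth.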

> Our experience seems to support what we were told by Cisco, but we may
> just be looking at this the wrong way. It's possible that the queue
> only seems to be policed, but maybe most of the drops are from RED.
> I'm just not sure now.

RED (which is enabled by default) introduces drops earlier than tail
drop alone would. This might not be the best idea for non-core
interfaces. If your traffic is mostly BE (and thus hitting queue 1,
threshold 1) you start RED-dropping at 40% and tail-dropping at 70% of
the queue buffer space. And queue 1 has 50% of the interface buffers,
which should be 583 KB [0]. If my back-of-the-envelope calculation is
right, that's ~3.3 ms of queuing for BE traffic (q1t1).

[0]: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper09186a0080131086.html
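That back-of-the-envelope figure can be reproduced directly, assuming a
1 Gb/s port and the 583 KB queue-1 buffer share cited above:

```python
# Back-of-the-envelope check of the figures above, assuming a 1 Gb/s
# port, 583 KB of buffer for queue 1, and default q1t1 thresholds of
# 40% (RED start) / 70% (tail drop).
port_speed_bps = 1_000_000_000
q1_buffer_bytes = 583 * 1024

red_start_bytes = 0.40 * q1_buffer_bytes   # RED begins dropping here
tail_drop_bytes = 0.70 * q1_buffer_bytes   # hard tail-drop point

# Worst-case queuing delay = time to drain a threshold-full queue
# at line rate.
red_delay_s = red_start_bytes * 8 / port_speed_bps
max_delay_s = tail_drop_bytes * 8 / port_speed_bps
print(f"RED onset ~{red_delay_s*1000:.1f} ms, "
      f"tail drop ~{max_delay_s*1000:.1f} ms")  # → ~1.9 ms / ~3.3 ms
```

So a BE packet arriving just under the tail-drop threshold waits about
3.3 ms, which matches the estimate in the paragraph above.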

-- 
Peter



