[c-nsp] WRR Confusion on 6748 blades
John Neiberger
jneiberger at gmail.com
Wed Jun 27 15:43:40 EDT 2012
> Unfortunately, there are no 'absolute' per queue counters, only per queue
> drop counters. So no easy way to determine if other queues are being
> utilized unless you just 'know' (based on your classification policies and
> known application mix) or those queues overflow & drop.
>
>> <...snip...>
>
>> > This suggests to me that there is traffic in other queues contending
>> > for the available bandwidth, and that there's periodically
>> > instantaneous congestion. Alternatively you could try sizing this
>> > queue bigger and using the original bandwidth ratio. Or a combination
>> > of those two (tweaking both bandwidth & queue-limit).
>> >
>> > Is there some issue with changing the bandwidth ratio on this queue
>> > (ie, are you seeing collateral damage)? Else, seems like you've solved
>> > the problem already ;)
>>
>> Nope, we don't have a problem with it. That's what we've been doing.
>> We haven't really been adjusting the queue-limit ratios, though; in
>> most cases we were just changing the bandwidth ratio weights. I'm
>> looking at an interface right now where the 30-second weighted traffic
>> rate has never gone above roughly 150 Mbps, but I'm still seeing output
>> queue drops (OQDs) in only one of the queues. How do you think we
>> should be interpreting that?
>
> In my opinion, it indicates that:
> 1. there is traffic in the other queues contending for the link bandwidth
> 2. there is instantaneous oversubscription that causes the problem queue to
> fill as it's not being serviced frequently enough and/or is inadequately
> sized
> 3. the other queues are sized/weighted appropriately to handle the amount of
> traffic that maps to them (ie, even under congestion scenarios, there is
> adequate buffer to hold enough packets to avoid drops)
>
> If #1 were not true, then I don't see how changing the bandwidth ratio
> would make any difference at all - if there is no traffic in the other
> queues, then the single remaining active queue would get unrestricted
> access to the full bandwidth of the link and no queuing would be
> necessary in the first place.
>
> Supposing there is no traffic in the other queues - in that case, you could
> certainly still have oversubscription of the single queue and drops, but
> changing the weight should have no effect on that scenario at all (while
> changing the q-limit certainly could).
>
> 2 cents,
> Tim
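For reference, the two knobs being discussed look roughly like this on
the 6748's 1p3q8t transmit ports. The weights and percentages below are
only placeholders (not recommendations), and the exact syntax varies a
bit by line card and IOS version:

interface GigabitEthernet1/1
 ! Relative WRR weights for the three standard transmit queues
 wrr-queue bandwidth 20 30 50
 ! Percent of the transmit buffer given to each of those queues
 ! (leaving some headroom for the strict-priority queue)
 wrr-queue queue-limit 40 25 20

The per-queue drop counters can then be watched with
"show queueing interface GigabitEthernet1/1".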
I just ran across an older thread where someone was having the same
problem. In his case, he had a 1-gig source and a 1-gig receiver on
the same switch and saw no output drops. He then moved the receiver to
another switch that was connected to the first one via a 10-gig link,
and started seeing output drops toward the receiver, apparently because
of the difference in serialization delay on the second switch: packets
arrive over the 10-gig link faster than they can be serialized out the
1-gig port, so the egress buffers fill with bursty traffic even when the
apparent traffic rate is low.
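To put rough numbers on that (assuming 1500-byte frames): 12,000 bits
take about 1.2 microseconds to arrive at 10 Gb/s but about 12
microseconds to send at 1 Gb/s, so a back-to-back burst shows up roughly
ten times faster than the 1-gig port can drain it and the egress queue
has to absorb the difference, even though the 30-second average rate
looks tiny.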
This is very interesting stuff. Just a little complicated. :)