[c-nsp] 3560 buffering

Łukasz Bromirski lukasz at bromirski.net
Sat Oct 17 21:47:53 EDT 2009


On 2009-10-14 12:05, Peter Rathlev wrote:

> someswitch#sh platform port-asic stats enqueue gi0/1
>    Interface Gi0/1 TxQueue Enqueue Statistics
>      Queue 0
>        Weight 0 Frames 2
>      Queue 1
>        Weight 1 Frames 34736
>        Weight 2 Frames 318358119
>      Queue 2
>      Queue 3
>        Weight 2 Frames 425983701
> someswitch#
> It seems that all queues are actually used according to the default CoS
> map. I think I'm getting confused here. Can anybody shed light on this?

With 'mls qos' not configured, the output TX queue is chosen by
looking at the ingress QoS label, which in this configuration has a
value of 0 (see 'sh platform qos label'; remember the output is
0-based, not 1-based, so "Tx queue-thr" will show "3-2", which
actually means the fourth queue). In addition, some traffic will
always land in TX Q1, as that is the queue the CPU uses for its
control-plane traffic, and this can't be changed, as you noted.

Simply enabling 'mls qos' assigns the traffic a QoS label of 1,
which means it will be serviced by TX Q2 in the default setting.
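In config terms that is a single global command (a sketch for a 3560; exact defaults can vary by IOS version, so verify on your platform):

   ! Enable QoS globally; default CoS/DSCP-to-queue maps take effect
   someswitch(config)# mls qos
   ! Verify -- should now report "QoS is enabled"
   someswitch# show mls qos

Note that enabling 'mls qos' changes buffering and queueing behaviour for all ports, so do it in a maintenance window if the switch is busy.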

Hope this clears things up a bit.

As for the original poster's problems with drops: I'd enable mls qos and
make sure all traffic and buffers are allocated to TX Q1. Bear in
mind, however, that bad things can happen to control-plane traffic in
such a config if the queue itself is oversubscribed for long periods.
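Something along these lines (a sketch only -- the buffer percentages are illustrative, not a recommendation; tune them against your actual traffic profile):

   ! Map all CoS values to egress queue 1, threshold 3
   someswitch(config)# mls qos srr-queue output cos-map queue 1 threshold 3 0 1 2 3 4 5 6 7
   ! Give most of the buffer pool to queue 1 (the four values must sum to 100)
   someswitch(config)# mls qos queue-set output 1 buffers 70 10 10 10

If you classify on DSCP rather than CoS, the corresponding 'mls qos srr-queue output dscp-map' commands are needed as well.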

-- 
"Everything will be okay in the end. |                  Łukasz Bromirski
  If it's not okay, it's not the end. |       http://lukasz.bromirski.net
