[c-nsp] ME3600X - tuning the output queues

Pshem Kowalczyk pshem.k at gmail.com
Tue Mar 26 17:05:09 EDT 2013


Hi,

We're already running 15.3. We've increased the buffers to 2MB per
service, but we still see occasional tail drops.
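
For reference, the 2MB version is just the policy quoted below with a
bigger queue-limit (the exact byte value here is illustrative):

policy-map PM-CUST-DEFAULT-100M-OUT
 class CM-DUMMY
 class class-default
  shape average 100000000
  queue-limit 2000000 bytes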

kind regards
Pshem


On 27 March 2013 02:26, George Giannousopoulos <ggiannou at gmail.com> wrote:
> Hi Pshem,
>
> We have seen the same issue with the 3800x
> In our case we use the maximum allowed queue depth in packets:
>  queue-limit 2457 packets
>
> If I'm not mistaken, there are improvements coming to the default queue
> sizes with the 15.3 train
>
> George
>
> On Mon, Mar 25, 2013 at 4:25 AM, Pshem Kowalczyk <pshem.k at gmail.com> wrote:
>>
>> Hi,
>>
>> We have a couple of ME3600X (24cx) providing MPLS-based L2 services to
>> anywhere between 20 and 80 customers per chassis. For the last few
>> weeks we've been chasing a packet loss issue with some of those
>> customers. The issue seems more likely to occur on interfaces with
>> multiple service instances than on those with just a few. In the most
>> extreme cases we have customers pushing 700Mb/s on a single port with
>> the default queue depth (~50KB) and not a single dropped packet on
>> one hand, and a bunch of <10Mb/s services on another port dropping
>> packets all the time.
>>
>> Initially we used the following QoS (per service instance):
>>
>> policy-map PM-CUST-DEFAULT-100M-OUT
>>  class class-default
>>   shape average 100000000
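>>
>> The policy is attached per EFP, roughly like this (interface, instance
>> and encapsulation numbers are just illustrative):
>>
>> interface GigabitEthernet0/1
>>  service instance 10 ethernet
>>   encapsulation dot1q 10
>>   service-policy output PM-CUST-DEFAULT-100M-OUT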
>>
>> This was causing massive drops even for services transmitting only
>> 5-15Mb/s. Since a queue-limit can't be applied when the policy
>> contains only the default class, we ended up with something like
>> this:
>>
>> policy-map PM-CUST-DEFAULT-100M-OUT
>>  class CM-DUMMY
>>  class class-default
>>   shape average 100000000
>>   queue-limit 1536000 bytes
>>
>> (where CM-DUMMY matches a non-existent qos-group).
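>>
>> The dummy class itself is trivial - it just matches a qos-group we
>> never set (the group number is arbitrary):
>>
>> class-map match-any CM-DUMMY
>>  match qos-group 99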
>>
>> This made things significantly better, but a 1.5MB queue per service
>> feels quite excessive (bearing in mind that the device has only 22MB
>> in total for shared queues on 1G ports). The TAC engineer told me the
>> memory is allocated dynamically, so it's safe to oversubscribe it.
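>>
>> Back of the envelope: 22MB / 1.5MB is only ~14 services' worth of
>> queue, so at 20-80 services per chassis that's roughly 1.4x to 5.5x
>> oversubscribed if every queue filled at once.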
>>
>> At this stage I'm still waiting to learn whether it's possible to
>> monitor the utilisation of that RAM.
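>>
>> Per-class queue and drop counters are at least visible per EFP via
>> the usual policy-map output, e.g.:
>>
>> show policy-map interface GigabitEthernet0/1 service instance 10
>>
>> but that only shows drops per service, not how much of the shared
>> pool is actually in use.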
>>
>> But the other question still lingers - what do you use as the
>> queue-limit? I know it's traffic-dependent, but with only 3 profiles
>> available there isn't much room to move (we use one profile for the
>> core-facing classes; this is the second one), so a fairly universal
>> depth has to be used. On top of that we don't really know what our
>> customers use the service for, so our visibility is very limited.
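>>
>> For scale, sizing a queue to a target delay at the shaped rate gives
>> queue_bytes = rate_bps / 8 * delay_s; 50ms at the 100Mb/s shaper is
>> 100000000 / 8 * 0.05 = 625000 bytes, and our 1536000 bytes works out
>> to roughly 123ms.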
>>
>> So if you use the platform - what's your magic number?
>>
>> kind regards
>> Pshem

